WorldWideScience

Sample records for time variable model

  1. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) was successful, predicting more than 90.7% of the data variance for 84 individual measurements. A newly derived ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smartphone. The models allow better design of aiming tasks, for example, button sizes on mobile phones for different user populations.
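
    As a minimal sketch of the kind of model verification described above, the snippet below fits the square-root-of-amplitude movement time form associated with Hoffmann and Gan, MT = a + b*sqrt(A), by ordinary least squares. The amplitudes, times, and resulting coefficients are invented placeholders, not the study's measurements.

```python
# Fit MT = a + b * sqrt(A) to hypothetical amplitude/movement-time data.
import numpy as np

A = np.array([20.0, 40.0, 80.0, 160.0, 320.0])      # movement amplitudes (mm), made up
MT = np.array([152.0, 198.0, 261.0, 330.0, 428.0])  # movement times (ms), made up

# Linear least squares on the transformed predictor sqrt(A)
b, a = np.polyfit(np.sqrt(A), MT, 1)
pred = a + b * np.sqrt(A)

# Proportion of variance explained (R^2), the statistic quoted in the abstract
ss_res = np.sum((MT - pred) ** 2)
ss_tot = np.sum((MT - MT.mean()) ** 2)
print(f"a = {a:.1f} ms, b = {b:.1f} ms/mm^0.5, R^2 = {1 - ss_res / ss_tot:.3f}")
```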

  2. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
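
    The structure function (SF) used to combine variability measurements across time-scales can be illustrated with a short sketch: SF(dt) is the root-mean-square magnitude difference between epochs separated by dt. The irregularly sampled "light curve" below is a toy random walk, not AGN data, and the binning choices are arbitrary.

```python
# Compute a first-order structure function, SF(dt) = sqrt(<[m(t+dt) - m(t)]^2>),
# for a toy, irregularly sampled light curve.
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3650.0, 500))      # observation epochs (days), invented
m = np.cumsum(rng.normal(0.0, 0.02, t.size))    # toy random-walk magnitudes

# All pairwise epoch separations and magnitude differences
iu = np.triu_indices(t.size, k=1)
dt = np.abs(t[:, None] - t[None, :])[iu]
dm = np.abs(m[:, None] - m[None, :])[iu]

# Bin in log(dt) and average the squared differences per bin
bins = np.logspace(0, np.log10(dt.max()), 20)
idx = np.digitize(dt, bins)
sf = [np.sqrt(np.mean(dm[idx == i] ** 2)) for i in range(1, bins.size) if np.any(idx == i)]
print(np.round(sf, 3))
```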

  3. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  4. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.
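
    Why external rotor resistance gives "variable slip" can be seen from the standard steady-state equivalent-circuit torque expression: adding rotor resistance flattens the torque-slip curve and moves its peak to higher slip. The sketch below evaluates that textbook formula; all machine parameters are invented for illustration and are unrelated to the Hydro-Quebec model.

```python
# Torque-slip curves of a wound-rotor induction machine for several values of
# external rotor resistance (the resistance "synthesized" in the abstract).
import numpy as np

V, f, p = 690.0 / np.sqrt(3), 60.0, 2          # phase voltage (V), frequency (Hz), pole pairs
R1, X1, R2, X2 = 0.01, 0.1, 0.01, 0.1          # stator/rotor impedances (ohm, referred)
w_s = 2 * np.pi * f / p                        # synchronous mechanical speed (rad/s)

def torque(slip, R_ext=0.0):
    """Standard equivalent-circuit torque; R_ext is the external rotor resistance."""
    Rr = R2 + R_ext
    return 3 * V**2 * (Rr / slip) / (w_s * ((R1 + Rr / slip) ** 2 + (X1 + X2) ** 2))

s = np.linspace(0.005, 0.1, 5)
for R_ext in (0.0, 0.02, 0.05):
    print(f"R_ext={R_ext}: T(s) =", np.round(torque(s, R_ext), 0))
```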

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than deleting them directly. Second, we identify key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and the results are compared with listed benchmark methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve forecasting capability.
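
    A minimal sketch of the pipeline shape described above (impute, rank variables, fit a Random Forest) is given below. The column names and data are placeholders, not the Shimen Reservoir dataset, and mean imputation plus importance ranking stand in for the paper's five imputation methods and factor-analysis selection.

```python
# Impute -> select variables -> Random Forest forecast of a "water level" column.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 6))
data[rng.integers(0, 500, 40), rng.integers(0, 5, 40)] = np.nan   # inject gaps
df = pd.DataFrame(data, columns=["rain", "temp", "humidity", "inflow", "outflow", "level"])

X = SimpleImputer(strategy="mean").fit_transform(df.drop(columns="level"))
y = df["level"].to_numpy()

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-50], y[:-50])
order = np.argsort(rf.feature_importances_)[::-1]   # proxy for variable selection
print("variable ranking:", [df.columns[i] for i in order])
print("holdout RMSE:", np.sqrt(np.mean((rf.predict(X[-50:]) - y[-50:]) ** 2)))
```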

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than deleting them directly. Second, we identify key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, and the results are compared with listed benchmark methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve forecasting capability.

  7. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    Full Text Available To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a variable for magnetic flux is used to improve the Hindmarsh–Rose neuron model. Simultaneously, because time delays exist when signals propagate between neurons or even within one neuron, it is important to study the role of time delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time delay or the external forcing current, which could be helpful for further investigation of the effects of electromagnetic radiation on biological neuronal systems.
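
    A minimal sketch of such a four-variable, flux-extended Hindmarsh–Rose neuron with a transmission delay is shown below, integrated by the Euler method. The parameter values and the placement of the delay follow common choices in the memristive HR literature and are assumptions, not the paper's exact settings.

```python
# Euler integration of a Hindmarsh-Rose neuron with magnetic-flux feedback
# (memductance rho(phi) = alpha + 3*beta*phi^2) and a delayed membrane potential.
import numpy as np

a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x0 = 0.006, 4.0, -1.6
k1, k2, alpha, beta = 0.4, 0.5, 0.1, 0.02    # flux-coupling parameters (assumed)
I_ext, tau, h, T = 3.0, 10.0, 0.01, 500.0    # forcing current, delay, step, horizon

n, lag = int(T / h), int(tau / h)
x = np.full(n, -1.6); y = np.zeros(n); z = np.zeros(n); phi = np.zeros(n)

for k in range(lag, n - 1):
    xd = x[k - lag]                           # delayed membrane potential
    rho = alpha + 3.0 * beta * phi[k] ** 2    # memductance of the flux variable
    x[k + 1] = x[k] + h * (y[k] - a * x[k]**3 + b * x[k]**2 - z[k]
                           + I_ext - k1 * rho * x[k])
    y[k + 1] = y[k] + h * (c - d * xd**2 - y[k])         # delay enters here (assumed)
    z[k + 1] = z[k] + h * r * (s * (x[k] - x0) - z[k])
    phi[k + 1] = phi[k] + h * (x[k] - k2 * phi[k])

print("spike count:", int(np.sum((x[1:] > 1.0) & (x[:-1] <= 1.0))))
```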

  8. Three-Factor Market-Timing Models with Fama and French's Spread Variables

    Directory of Open Access Journals (Sweden)

    Joanna Olbryś

    2010-01-01

    Full Text Available The traditional performance measurement literature has attempted to distinguish security selection, or stock-picking ability, from market-timing, or the ability to predict overall market returns. However, the literature finds that it is not easy to separate ability into such dichotomous categories. Some researchers have developed models that allow the decomposition of manager performance into market-timing and selectivity skills. The main goal of this paper is to present modified versions of classical market-timing models with Fama and French’s spread variables SMB and HML, in the case of Polish equity mutual funds. (original abstract)
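
    One common way to modify a classical market-timing model with the SMB and HML spread variables is a Treynor-Mazuy-style regression with Fama-French factors: fund excess return regressed on the market excess return, SMB, HML, and a squared market term whose coefficient captures timing ability. The sketch below uses simulated placeholder data; it illustrates the regression form, not the paper's exact specification or Polish fund results.

```python
# Treynor-Mazuy-style timing regression extended with SMB and HML.
import numpy as np

rng = np.random.default_rng(2)
n = 240                                        # months of invented fund history
mkt = rng.normal(0.005, 0.04, n)               # market excess return
smb, hml = rng.normal(0, 0.02, n), rng.normal(0, 0.02, n)
fund = 0.001 + 0.9 * mkt + 0.2 * smb - 0.1 * hml + 1.5 * mkt**2 + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), mkt, smb, hml, mkt**2])
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, b_mkt, b_smb, b_hml, gamma = coef
# gamma > 0 would indicate market-timing ability; alpha captures selectivity.
print(f"alpha={alpha:.4f}, beta={b_mkt:.2f}, SMB={b_smb:.2f}, "
      f"HML={b_hml:.2f}, timing gamma={gamma:.2f}")
```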

  9. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant boundaries, with the drift rate varying across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation, as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.
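
    The integration idea behind such mixed distributions can be sketched briefly: marginalize a fixed-drift first-passage-time CDF over a normal drift distribution with Gauss-Hermite quadrature. For simplicity the sketch uses the single-barrier Wiener FPT formula rather than the paper's two-boundary expression, so it illustrates only the mixing step; all parameter values (nu_bar, eta, a) are invented.

```python
# Marginalize a first-passage-time CDF over drift ~ N(nu_bar, eta^2).
import numpy as np
from scipy.stats import norm

def fpt_cdf(t, nu, a=1.0):
    """P(T <= t) for a unit-variance Wiener process with drift nu, barrier a > 0."""
    rt = np.sqrt(t)
    return norm.cdf((nu * t - a) / rt) + np.exp(2 * nu * a) * norm.cdf((-nu * t - a) / rt)

def fpt_cdf_mixed(t, nu_bar=0.4, eta=0.2, a=1.0, order=40):
    """Gauss-Hermite quadrature over the normally distributed drift."""
    x, w = np.polynomial.hermite.hermgauss(order)
    nus = nu_bar + np.sqrt(2.0) * eta * x          # substitution for the Gaussian weight
    return sum(wi * fpt_cdf(t, nui, a) for wi, nui in zip(w, nus)) / np.sqrt(np.pi)

for t in (1.0, 2.0, 5.0, 10.0):
    print(t, round(float(fpt_cdf_mixed(t)), 4))
```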

  10. Modelling a variable valve timing spark ignition engine using different neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beham, M. [BMW AG, Munich (Germany); Yu, D.L. [John Moores University, Liverpool (United Kingdom). Control Systems Research Group

    2004-10-01

    In this paper different neural networks (NN) are compared for modelling a variable valve timing spark-ignition (VVT SI) engine. The overall system is divided for each output into five neural multi-input single output (MISO) subsystems. Three kinds of NN, multilayer Perceptron (MLP), pseudo-linear radial basis function (PLRBF), and local linear model tree (LOLIMOT) networks, are used to model each subsystem. Real data were collected when the engine was under different operating conditions and these data are used in training and validation of the developed neural models. The obtained models are finally tested in a real-time online model configuration on the test bench. The neural models run independently of the engine in parallel mode. The model outputs are compared with process output and compared among different models. These models performed well and can be used in the model-based engine control and optimization, and for hardware in the loop systems. (author)
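
    A minimal sketch of one such MISO submodel is given below: a small multilayer perceptron mapping engine speed, valve timing, and load to a single output such as torque. The architecture, inputs, and synthetic data are illustrative assumptions, not the BMW test-bench setup.

```python
# One multi-input single-output engine submodel as an MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
speed = rng.uniform(1000, 6000, 800)          # engine speed (rpm), invented
vvt = rng.uniform(0, 40, 800)                 # intake valve timing (deg crank), invented
load = rng.uniform(0.1, 1.0, 800)
torque = 50 + 0.02 * speed * load - 0.05 * (vvt - 20) ** 2 + rng.normal(0, 2, 800)

X = np.column_stack([speed, vvt, load])
Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(Xs[:600], torque[:600])             # train on part, validate on the rest
print("validation R^2:", round(model.score(Xs[600:], torque[600:]), 3))
```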

  11. Commuters’ valuation of travel time variability in Barcelona

    OpenAIRE

    Javier Asensio; Anna Matas

    2007-01-01

    The value given by commuters to the variability of travel times is empirically analysed using stated preference data from Barcelona (Spain). Respondents are asked to choose between alternatives that differ in terms of cost, average travel time, variability of travel times and departure time. Different specifications of a scheduling choice model are used to measure the influence of various socioeconomic characteristics. Our results show that travel time variability...

  12. Travel time variability and rational inattention

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Jiang, Gege

    2017-01-01

    This paper sets up a rational inattention model for the choice of departure time for a traveler facing random travel time. The traveler chooses how much information to acquire about the travel time outcome before choosing departure time. This reduces the cost of travel time variability compared...

  13. Time variability of X-ray binaries: observations with INTEGRAL. Modeling

    International Nuclear Information System (INIS)

    Cabanac, Clement

    2007-01-01

    The exact origin of the observed X- and gamma-ray variability in X-ray binaries is still an open debate in high energy astrophysics. Among others, these objects show aperiodic and quasi-periodic luminosity variations on timescales as small as a millisecond. This erratic behavior must put constraints on the proposed emission processes occurring in the vicinity of the neutron star or the stellar-mass black hole hosted by these objects. We propose here to study their behavior in three different ways: first, we examine the evolution of a particular X-ray source discovered by INTEGRAL, IGR J19140+0951. Using timing and spectral data from different instruments, we show that the source type is plausibly consistent with a High Mass X-ray Binary hosting a neutron star. Subsequently, we propose a new method dedicated to the study of timing data coming from coded-mask aperture instruments. Applying it to real INTEGRAL/ISGRI data, we detect the presence of periodic and quasi-periodic features in some pulsars and microquasars at energies as high as a hundred keV. Finally, we suggest a model designed to describe the low-frequency variability of X-ray binaries in their hardest state. This model is based on thermal Comptonization of soft photons by a warm corona in which a pressure wave propagates in cylindrical geometry. Using both numerical simulations and an analytical solution, we show that this model is suitable to describe some of the typical features observed in X-ray binary power spectra in their hard state and their evolution, such as aperiodic noise and low-frequency quasi-periodic oscillations. (author) [fr

  14. Pulse timing for cataclysmic variables

    International Nuclear Information System (INIS)

    Chester, T.J.

    1979-01-01

    It is shown that present pulse timing measurements of cataclysmic variables can be explained by models of accretion disks in these systems, and thus such measurements can constrain disk models. The model for DQ Her correctly predicts the amplitude variation of the continuum pulsation and can also perhaps explain the asymmetric amplitude of the pulsed lambda4686 emission line. Several other predictions can be made from the model. In particular, if pulse timing measurements that resolve emission lines both in wavelength and in binary phase can be made, the projected orbital radius of the white dwarf could be deduced

  15. A stochastic fractional dynamics model of space-time variability of rain

    Science.gov (United States)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  16. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
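
    The core calculation can be sketched compactly: assume first-order decay of the pathogen load during depuration and a "worst case" (upper-quantile) initial load, then solve N0*exp(-k*t) <= limit for the minimum time. The rate constant, quantile, and limit below are invented, not the paper's U.K. harvest-site estimates.

```python
# Minimum depuration time under first-order decay and a worst-case initial load.
import numpy as np
from scipy.stats import lognorm

sigma, median_load = 1.2, 500.0               # assumed load variability and median (copies/g)
n0 = lognorm.ppf(0.95, s=sigma, scale=median_load)   # worst-case (95th pct) initial load
limit = 200.0                                 # hypothetical risk-management level
k = 0.02                                      # assumed decay rate (1/hour)

t_min = np.log(n0 / limit) / k                # solve N0 * exp(-k t) <= limit
print(f"95th-percentile initial load: {n0:.0f}, minimum depuration: {t_min:.0f} h")
```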

  17. Effects of Variable Production Rate and Time-Dependent Holding Cost for Complementary Products in Supply Chain Model

    Directory of Open Access Journals (Sweden)

    Mitali Sarkar

    2017-01-01

    Full Text Available Recently, a major trend has been to redesign production systems by controlling, or making variable, the production rate within some fixed interval to maintain the optimal level. This strategy is more effective when the holding cost is time-dependent, as it is interrelated with the holding duration of products and the rate of production. An effort is made to build a supply chain model (SCM) to show the joint effect of a variable production rate and time-varying holding cost for a specific type of complementary products, where the products are made by two different manufacturers and a common retailer bundles them and sells the bundles to end customers. Demand for each product is specified by stochastic reservation prices with a known potential market size. The players of the SCM are considered to have unequal power. A Stackelberg game approach is employed to obtain the global optimum solution of the model. An illustrative numerical example, graphical representation, and managerial insights are given to illustrate the model. Results show that the variable production rate and time-dependent holding cost yield greater savings than models in the existing literature.

  18. Constructing the reduced dynamical models of interannual climate variability from spatial-distributed time series

    Science.gov (United States)

    Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    We suggest a method for empirical forecasting of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding - the set of variables determining the phase space in which the model works - is no doubt the most important step in such modeling, but this task is non-trivial due to the huge dimension of time series of typical climatic fields. An appropriate expansion of the observational time series is needed, yielding a number of principal components that are considered as phase variables and are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have for capturing the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology that combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed, covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO and strong blocking conditions over the mid latitudes, is demonstrated. Also, we investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS) [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random
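
    The two-step recipe (spatio-temporal expansion into principal components, then a nonlinear map as the evolution operator) can be sketched as follows. The random "field", the embedding depth, and the network size are placeholders; this is not the authors' neural architecture or the reanalysis data.

```python
# PCA expansion of a spatio-temporal field, then a nonlinear lagged-PC forecast.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
field = rng.normal(size=(600, 1000))                       # (time, grid points), toy data
field[:, :500] += np.sin(np.arange(600) / 20.0)[:, None]   # one coherent mode

pcs = PCA(n_components=5).fit_transform(field)   # phase variables of the reduced model

lag = 3                                          # time-lagged embedding depth (assumed)
X = np.column_stack([pcs[i:len(pcs) - lag + i] for i in range(lag)])
y = pcs[lag:]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                      # nonlinear evolution operator
print("forecast R^2 on held-out tail:", round(model.score(X[-50:], y[-50:]), 3))
```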

  19. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  20. An Epidemic Model of Computer Worms with Time Delay and Variable Infection Rate

    Directory of Open Access Journals (Sweden)

    Yu Yao

    2018-01-01

    Full Text Available With the rapid development of the Internet, network security issues become increasingly serious. Temporary patches are put on infectious hosts, and these may occasionally lose efficacy. This leads to a time delay when vaccinated hosts change to susceptible hosts. On the other hand, worm infection is usually a nonlinear process. Considering the actual situation, a variable infection rate is introduced to describe the spread process of worms. Based on the above aspects, we propose a time-delayed worm propagation model with variable infection rate. Then the existence condition and the stability of the positive equilibrium are derived. Due to the existence of time delay, the worm propagation system may be unstable and out of control. Moreover, the threshold τ0 of Hopf bifurcation is obtained. The worm propagation system is stable if the time delay is less than τ0; when the time delay exceeds τ0, the system becomes unstable. In addition, numerical experiments have been performed, and their results match the conclusions we deduce. The numerical experiments also show that there exists a threshold in the parameter a, which implies that we should choose an appropriate infection rate β(t) to constrain worm prevalence. Finally, simulation experiments are carried out to prove the validity of our conclusions.
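
    The flavor of such a system can be sketched with a generic delayed compartment model integrated by Euler steps and a history buffer: vaccinated hosts revert to susceptible after a delay tau, and the infection rate saturates as beta(I) = beta0/(1 + a*I). The equations and parameters below are a stand-in for illustration, not the paper's exact model terms.

```python
# Euler integration of a delayed SIVS-style worm model with a variable infection rate.
import numpy as np

beta0, a, gamma, mu = 0.5, 5.0, 0.1, 0.05   # infection, saturation, patching, patch-loss
tau, h, T = 20.0, 0.01, 400.0               # delay before patches lose efficacy
n, lag = int(T / h), int(tau / h)

S = np.full(n, 0.9); I = np.full(n, 0.1); V = np.zeros(n)
for k in range(n - 1):
    Vd = V[k - lag] if k >= lag else 0.0    # hosts vaccinated tau time units ago
    beta = beta0 / (1.0 + a * I[k])         # variable (nonlinear) infection rate
    S[k + 1] = S[k] + h * (-beta * S[k] * I[k] + mu * Vd)
    I[k + 1] = I[k] + h * (beta * S[k] * I[k] - gamma * I[k])
    V[k + 1] = V[k] + h * (gamma * I[k] - mu * Vd)

print("final infected fraction:", round(I[-1], 3))
```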

  1. From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Wasowski, Andrzej

    2017-01-01

    ... efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers. Real-time variability models can be model checked using the standard UPPAAL. We have implemented abstractions as syntactic source...

  2. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    Science.gov (United States)

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus has indicated that the strong association between climatic variables and disease could be used to develop models to explain the incidence of the disease. To develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
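
    The lagged-regressor idea can be sketched with an ARIMA-type model that includes precipitation shifted by 20 weeks as an exogenous variable. The series below are simulated stand-ins, and statsmodels' SARIMAX plays the role of the Expert Modeler/ARIMA workflow described above.

```python
# ARIMA-with-exogenous-regressor sketch: incidence vs precipitation lagged 20 weeks.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
weeks = 300
rain = pd.Series(rng.gamma(2.0, 20.0, weeks))            # weekly precipitation (mm), toy
lag20 = rain.shift(20)                                   # 20-week delayed driver
cases = 30 + 0.3 * lag20.fillna(lag20.mean()) + rng.normal(0, 5, weeks)

df = pd.DataFrame({"cases": cases, "rain_lag20": lag20}).dropna()
model = SARIMAX(df["cases"], exog=df[["rain_lag20"]], order=(1, 0, 1)).fit(disp=False)
print(model.params.round(3))                             # lagged-rain coefficient
```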

  3. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.

  4. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. It is shown that each solution of the system intersects with every discontinuous surface exactly once via several new well-proposed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to show that variable-time impulsive neural networks and their fixed-time counterparts share the same stability properties. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent to a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP study. © 2017, The International Biometric Society.

  6. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.

  7. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
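
    A minimal sketch of the space-for-time idea with a non-random validation split is shown below: predict April 1 SWE from mean winter temperature and cumulative winter precipitation, holding out whole stations rather than random rows. The synthetic "stations" stand in for the 497 SNOTEL sites, and the linear model is just one low-complexity candidate.

```python
# Station-held-out (grouped) cross-validation of a simple SWE model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(6)
n = 1000
station = rng.integers(0, 100, n)             # station id for grouped validation
temp = rng.normal(-4, 3, n)                   # mean winter temperature (C), invented
precip = rng.gamma(4, 150, n)                 # cumulative winter precipitation (mm)
swe = np.maximum(0.0, 0.8 * precip - 40.0 * temp + rng.normal(0, 60, n))

X = np.column_stack([temp, precip])
scores = []
for tr, te in GroupKFold(n_splits=5).split(X, swe, groups=station):
    m = LinearRegression().fit(X[tr], swe[tr])   # low-complexity candidate model
    scores.append(m.score(X[te], swe[te]))
print("held-out-station R^2:", np.round(scores, 2))
```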

  8. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    Science.gov (United States)

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variables interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied with their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  9. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

    Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to cases where the model is tuned during an iterative process, where model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSUs and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found – in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) as well as the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate with large ensemble variability. These findings bear
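
    The diagnostic itself is simple to sketch: run perturbed copies of a model and track how the ensemble spread (standard deviation and range) grows with time. Below, a chaotic logistic map stands in for the circulation model purely to show the exponential divergence from tiny initial perturbations; nothing here reproduces the ocean model.

```python
# Ensemble spread growth from slightly perturbed initial conditions.
import numpy as np

members, steps, r = 20, 60, 3.9
state = 0.5 + np.random.default_rng(7).normal(0, 1e-8, members)  # tiny perturbations

spread_sd = []
for _ in range(steps):
    state = r * state * (1.0 - state)         # chaotic update amplifies differences
    spread_sd.append(state.std())

# Exponential divergence: spread grows by orders of magnitude within tens of steps
print([f"{s:.1e}" for s in spread_sd[::10]])
```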

  10. MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK

    International Nuclear Information System (INIS)

    MacLeod, C. L.; Ivezic, Z.; Bullock, E.; Kimball, A.; Sesar, B.; Westman, D.; Brooks, K.; Gibson, R.; Becker, A. C.; Kochanek, C. S.; Kozlowski, S.; Kelly, B.; De Vries, W. H.

    2010-01-01

    We model the time variability of ∼9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves by a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity and black hole mass, and rest-frame wavelength. Our analysis shows SF∞ to increase with decreasing luminosity and rest-frame wavelength as observed previously, and without a correlation with redshift. We find a correlation between SF∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample. We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between the full sample and the small subsample of quasars detected
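
    The DRW is the Ornstein-Uhlenbeck process, so a light curve with a given τ and SF∞ can be simulated exactly and checked against the model structure function SF(Δt) = SF∞ · sqrt(1 − exp(−Δt/τ)). The sketch below uses illustrative parameter values, not fitted Stripe 82 quasars.

```python
# Simulate a damped random walk and compare its empirical structure function
# with SF(dt) = SF_inf * sqrt(1 - exp(-dt/tau)).
import numpy as np

tau, sf_inf, dt, n = 200.0, 0.2, 1.0, 200000   # days, mag, days, samples
rho = np.exp(-dt / tau)                        # exact AR(1) autocorrelation per step
sigma = sf_inf / np.sqrt(2.0)                  # stationary standard deviation

rng = np.random.default_rng(8)
x = np.empty(n); x[0] = 0.0
for k in range(n - 1):                         # exact OU (DRW) update
    x[k + 1] = rho * x[k] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))

for lag in (10, 100, 1000):
    sf_emp = np.sqrt(np.mean((x[lag:] - x[:-lag]) ** 2))
    sf_mod = sf_inf * np.sqrt(1 - np.exp(-lag * dt / tau))
    print(f"lag {lag:>4} d: empirical {sf_emp:.3f}, model {sf_mod:.3f}")
```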

  11. Timing variability in children with early-treated congenital hypothyroidism

    NARCIS (Netherlands)

    Kooistra, L.; Snijders, T.A.B.; Schellekens, J.M.H.; Kalverboer, A.F.; Geuze, R.H.

    This study reports on central and peripheral determinants of timing variability in self-paced tapping by children with early-treated congenital hypothyroidism (CH). A theoretical model of the timing of repetitive movements developed by Wing and Kristofferson was applied to estimate the central

  12. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. The results of several model-verification tests - a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96) - indicate the model's efficiency. The results revealed significant differences in the concentrations of the state variables between the constant and variable stoichiometry simulations. Consequently, considering variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.

  13. Modelling and multi-objective optimization of a variable valve-timing spark-ignition engine using polynomial neural networks and evolutionary algorithms

    International Nuclear Information System (INIS)

    Atashkari, K.; Nariman-Zadeh, N.; Goelcue, M.; Khalkhali, A.; Jamali, A.

    2007-01-01

    The main reason for the efficiency decrease at part-load conditions for four-stroke spark-ignition (SI) engines is the flow restriction at the cross-sectional area of the intake system. Traditionally, valve-timing has been designed to optimize operation at high engine-speed and wide-open-throttle conditions. Several investigations have demonstrated that improvements in engine performance at part-load conditions can be accomplished if the valve-timing is variable. Controlling valve-timing can be used to improve the torque and power curve as well as to reduce fuel consumption and emissions. In this paper, a group method of data handling (GMDH) type neural network and evolutionary algorithms (EAs) are first used for modelling the effects of intake valve-timing (Vt) and engine speed (N) of a spark-ignition engine on both developed engine torque (T) and fuel consumption (Fc), using some experimentally obtained training and test data. Using the polynomial neural network models obtained in this way, a multi-objective EA (non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism is then used for Pareto-based optimization of the variable valve-timing engine considering two conflicting objectives, torque (T) and fuel consumption (Fc). The comparison results demonstrate the superiority of the GMDH-type models over feedforward neural network models in terms of the statistical measures on the training data, the testing data and the number of hidden neurons. Further, it is shown that some interesting and important relationships, as useful optimal design principles, involved in the performance of the variable valve-timing four-stroke spark-ignition engine can be discovered by the Pareto-based multi-objective optimization of the polynomial models. Such important optimal principles would not have been obtained without the use of both the GMDH-type neural network modelling and the multi-objective Pareto optimization approach

  14. Surgeon and type of anesthesia predict variability in surgical procedure times.

    Science.gov (United States)

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated
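
    The analysis style described above can be sketched with a main-effects linear model for log procedure time, which makes surgeon and anesthesia effects multiplicative in time, matching the "proportionate to time" finding. The data, factor levels, and effect sizes below are simulated placeholders, not the hospital's 46,322 cases.

```python
# Main-effects OLS on log procedure time with surgeon and anesthesia factors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 400
df = pd.DataFrame({
    "surgeon": rng.choice(["A", "B", "C"], n),
    "anesthesia": rng.choice(["general", "regional"], n),
    "age": rng.uniform(20, 80, n),
})
base = {"A": 0.0, "B": 0.25, "C": -0.15}       # invented surgeon effects (log scale)
df["log_time"] = (4.0 + df["surgeon"].map(base)
                  + 0.2 * (df["anesthesia"] == "general") + 0.002 * df["age"]
                  + rng.normal(0, 0.3, n))

fit = smf.ols("log_time ~ C(surgeon) + C(anesthesia) + age", data=df).fit()
print(fit.params.round(3))                     # surgeon terms dominate, as in the study
```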

  15. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  16. On the "early-time" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for the turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant variables before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained, respectively, from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted profiles for the turbulence model variables and profiles of the variables obtained from low-Atwood-number three-dimensional simulations shows reasonable agreement.

  17. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios, E-mail: junhankim@email.arizona.edu [Department of Astronomy and Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  18. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (~10^6-7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kα variability characteristics. The Fe Kα line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and

  19. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  20. Valuing travel time variability: Characteristics of the travel time distribution on an urban road

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Fukuda, Daisuke

    2012-01-01

    This paper provides a detailed empirical investigation of the distribution of travel times on an urban road for valuation of travel time variability. Our investigation is premised on the use of a theoretical model with a number of desirable properties. The definition of the value of travel time variability depends on certain properties of the distribution of random travel times that require empirical verification. Applying a range of nonparametric statistical techniques to data giving minute-by-minute travel times for a congested urban road over a period of five months, we show that the standardized travel time is roughly independent of the time of day as required by the theory. Except for the extreme right tail, a stable distribution seems to fit the data well. The travel time distributions on consecutive links seem to share a common stability parameter such that the travel time distribution

  1. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, is also offered.

  2. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...

  3. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  4. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
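    A toy sketch of the leaping-Tick idea follows; the names and structure are hypothetical, not the paper's actual model-checker encoding. Instead of advancing the clock one unit per tick, each tick leaps by the minimum remaining timer among the processes, so no intermediate states are generated.

```python
# Minimal sketch of a "leaping Tick": advance the global clock to the
# next instant at which any process timer can expire, rather than by
# one unit per tick.

def run(timers, horizon):
    """timers: dict mapping process name -> remaining time units."""
    now = 0
    while now < horizon and timers:
        leap = min(timers.values())          # largest safe jump
        now += leap                          # one tick covers `leap` units
        for name in list(timers):
            timers[name] -= leap
            if timers[name] == 0:
                print(f"t={now}: {name} fires")
                del timers[name]
    return now

run({"task_a": 7, "task_b": 12, "watchdog": 30}, horizon=100)
```

With unit ticks this schedule would take 30 states to reach the watchdog; the leaping version reaches it in 3.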

  5. X-ray spectra and time variability of active galactic nuclei

    International Nuclear Information System (INIS)

    Mushotzky, R.F.

    1984-02-01

    The X-ray spectra of broad-line active galactic nuclei (AGN) of all types (Seyfert 1s, NELGs, broad-line radio galaxies) are well fit by a power law in the 0.5 to 100 keV band of mean energy slope alpha = 0.68 ± 0.15. There is, as yet, no strong evidence for time variability of this slope in a given object. The constraints that this places on simple models of the central energy source are discussed. BL Lac objects have quite different X-ray spectral properties and show pronounced X-ray spectral variability. On time scales longer than 12 hours, most radio-quiet AGN do not show strong (delta I/I ≈ 0.5) variability. The probability of variability of these AGN seems to be inversely related to their luminosity. However, characteristic timescales for variability have not been measured for many objects. This general lack of variability may imply that most AGN are well below the Eddington limit. Radio-bright AGN tend to be more variable than radio-quiet AGN on long (tau ≈ 6 month) timescales.

  6. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim to suggest an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of the random forest is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect to achieve higher predictive accuracy.
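    A minimal sketch of the setup the abstract describes, assuming scikit-learn's RandomForestRegressor is an acceptable stand-in: one short synthetic series, a small number of recent lags as predictors, and a one-step forecast for the last point.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic AR(2)-like series standing in for one short time series.
n = 300
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# A low number of recent lags as predictors (the setting found best above).
n_lags = 3
X = np.column_stack([y[lag:n - n_lags + lag] for lag in range(n_lags)])
target = y[n_lags:]                       # row i holds y[i..i+2], target y[i+3]

# Train on all but the last point; one-step forecast for the last point.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:-1], target[:-1])
print("forecast:", rf.predict(X[-1:])[0], "actual:", target[-1])
```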

  7. Additive measures of travel time variability

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2011-01-01

    This paper derives a measure of travel time variability for travellers equipped with scheduling preferences defined in terms of time-varying utility rates, and who choose departure time optimally. The corresponding value of travel time variability is a constant that depends only on preference parameters. The measure is unique in being additive with respect to independent parts of a trip. It has the variance of travel time as a special case. Extension is provided to the case of travellers who use a scheduled service with fixed headway.

  8. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.

  9. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  10. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

    Directory of Open Access Journals (Sweden)

    Catherine Rhian Gwyn Jones

    2011-12-01

    Parkinson's disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, but most studies find impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to revisit the paradigm by testing across four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both 'on' and 'off' dopaminergic medication. Four finger tapping intervals (ISI) were used: 250 ms, 500 ms, 1000 ms and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant's tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients 'on' and 'off' dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed 'ahead' of the beat whilst the other groups performed 'behind' the beat. We speculate that this 'hastening' relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of

  11. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are discussed.

  12. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows…

  13. Time series analysis of dengue incidence in Guadeloupe, French West Indies: Forecasting models using climate variables as predictors

    Directory of Open Access Journals (Sweden)

    Ruche Guy

    2011-06-01

    better than humidity and rainfall. SARIMA models using climatic data as independent variables could be easily incorporated into an early (3-months-ahead) and reliable monitoring system of dengue outbreaks. This approach, which is practicable for a surveillance system, has public health implications in helping to predict dengue epidemics and therefore the timely, appropriate and efficient implementation of prevention activities.
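    A hedged sketch of a seasonal ARIMA model with an exogenous climate regressor, using statsmodels' SARIMAX on synthetic monthly data; the (1,0,1)x(1,0,1,12) order and the 2-month temperature lag are illustrative choices, not the fitted model from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)

# Synthetic monthly dengue counts driven by lagged temperature.
n = 120
temp = 25 + 3 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 0.5, n)
cases = 50 + 8 * np.roll(temp, 2) + rng.normal(0, 5, n)   # ~2-month lag

# Drop the first year so the wrapped-around lag values are discarded.
df = pd.DataFrame({"cases": cases, "temp_lag2": np.roll(temp, 2)}).iloc[12:]

model = SARIMAX(df["cases"], exog=df[["temp_lag2"]],
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)

# 3-months-ahead forecast, as in the proposed early-warning use; the last
# observed exog values stand in for forecast temperatures here.
future_exog = df[["temp_lag2"]].iloc[-3:]
print(fit.forecast(steps=3, exog=future_exog))
```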

  14. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Science.gov (United States)

    Post, P.; Kärner, O.

    2009-04-01

    Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from nearly stationary longer-range variability. This is an indication of the fact that several geophysical time series show short-range non-stationary behaviour and stationary behaviour in the longer range (Davis et al., 1996). In order to model series like that, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model properly the long-range tendencies. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of the type (0,1,1). This model is applicable for daily precipitation simulation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0

  15. Pure radiation in space-time models that admit integration of the eikonal equation by the separation of variables method

    Science.gov (United States)

    Osetrin, Evgeny; Osetrin, Konstantin

    2017-11-01

    We consider space-time models with pure radiation, which admit integration of the eikonal equation by the method of separation of variables. For all types of these models, the equations of the energy-momentum conservation law are integrated. The resulting form of metric, energy density, and wave vectors of radiation as functions of metric for all types of spaces under consideration is presented. The solutions obtained can be used for any metric theories of gravitation.

  16. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of 240Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  17. Time dependent analysis of assay comparability: a novel approach to understand intra- and inter-site variability over time

    Science.gov (United States)

    Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire

    2015-09-01

    We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.

  18. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    Science.gov (United States)

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  19. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    Science.gov (United States)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  20. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)

  1. Chaos synchronization in time-delayed systems with parameter mismatches and variable delay times

    International Nuclear Information System (INIS)

    Shahverdiev, E.M.; Nuriev, R.A.; Hashimov, R.H.; Shore, K.A.

    2004-06-01

    We investigate synchronization between two unidirectionally linearly coupled chaotic nonidentical time-delayed systems and show that parameter mismatches are of crucial importance to achieve synchronization. We establish that, independent of the relation between the delay time in the coupled systems and the coupling delay time, only retarded synchronization with the coupling delay time is obtained. We show that with parameter mismatch or without it neither complete nor anticipating synchronization occurs. We derive existence and stability conditions for the retarded synchronization manifold. We demonstrate our approach using examples of the Ikeda and Mackey-Glass models. Also, for the first time, we investigate chaos synchronization in time-delayed systems with variable delay time and find both existence and sufficient stability conditions for the retarded synchronization manifold with the coupling-delay lag time. (author)
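    As a rough numerical illustration (not the paper's analysis), the sketch below Euler-integrates two unidirectionally coupled Mackey-Glass systems and checks the retarded-synchronization signature y(t) ≈ x(t − τc), where τc is the coupling delay. The parameters and strong dissipative coupling are illustrative choices; the paper's existence and stability conditions are not reproduced here.

```python
import numpy as np

# dx/dt = a*x(t-tau)/(1+x(t-tau)**10) - b*x
# dy/dt = a*y(t-tau)/(1+y(t-tau)**10) - b*y + K*(x(t-tau_c) - y)
a, b, tau, tau_c, K = 0.2, 0.1, 17.0, 10.0, 5.0
dt, steps = 0.1, 40000
d, dc = int(tau / dt), int(tau_c / dt)

x = np.full(steps, 1.2)     # constant histories as initial conditions
y = np.full(steps, 0.9)
for t in range(max(d, dc), steps - 1):
    fx = a * x[t - d] / (1 + x[t - d] ** 10) - b * x[t]
    fy = a * y[t - d] / (1 + y[t - d] ** 10) - b * y[t] + K * (x[t - dc] - y[t])
    x[t + 1] = x[t] + dt * fx
    y[t + 1] = y[t] + dt * fy

# After transients, the response should follow the driver with lag tau_c.
s = steps // 2
corr = np.corrcoef(y[s:], x[s - dc:steps - dc])[0, 1]
print(f"corr(y(t), x(t - tau_c)) = {corr:.4f}")
```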

  2. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  3. Spatial and temporal variability of interhemispheric transport times

    Science.gov (United States)

    Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois

    2018-05-01

    The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.

  4. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression modeling imposes strict assumptions, whereas nonparametric regression does not require an assumed model form. Time series data are observations of a variable recorded over time, so if a time series is to be modeled by regression, the response and predictor variables must be determined first. The response variable in a time series is the value at time t (y_t), while the predictor variables are the significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One advantage of the nonparametric Fourier series approach is its ability to handle data with a trigonometric pattern. Modeling with a Fourier series requires the parameter K, whose value can be determined by the Generalized Cross Validation (GCV) method. In modeling inflation for the transportation, communication and financial services sector using a Fourier series, an optimal K of 120 parameters yields an R-squared of 99%, whereas multiple linear regression yields an R-squared of 90%.
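    A compact sketch of Fourier-series regression with K selected by Generalized Cross Validation, on synthetic data; the basis construction and the GCV formula for a linear smoother are standard, while the data are stand-ins for the inflation series.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data with a trigonometric pattern.
n = 200
x = np.linspace(0, 2 * np.pi, n)
y = np.sin(3 * x) + 0.5 * np.cos(7 * x) + rng.normal(0, 0.3, n)

def fourier_design(x, K):
    """Design matrix: intercept plus cos(kx), sin(kx) for k = 1..K."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.column_stack(cols)

best = None
for K in range(1, 21):
    X = fourier_design(x, K)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    p = X.shape[1]                       # number of fitted parameters
    gcv = n * rss / (n - p) ** 2         # GCV score for a linear smoother
    if best is None or gcv < best[0]:
        best = (gcv, K)

print("GCV-optimal K:", best[1])
```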

  5. Bayesian dynamic modeling of time series of dengue disease case counts.

    Science.gov (United States)

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful

  6. Space-time modeling of electricity spot prices

    DEFF Research Database (Denmark)

    Abate, Girum Dagnachew; Haldrup, Niels

    In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network … in the spot price dynamics. Estimation of the spatial Durbin model shows that the spatial lag variable is as important as the temporal lag variable in describing the spot price dynamics. We use the partial derivatives impact approach to decompose the price impacts into direct and indirect effects, and we show that price effects transmit to neighboring markets and decline with distance. In order to examine the evolution of the spatial correlation over time, a time-varying parameters spot price spatial Durbin model is estimated using recursive estimation. It is found that the spatial correlation within the Nord…

  7. A study of applying variable valve timing to highly rated diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Stone, C R; Leonard, H J [comps.; Brunel Univ., Uxbridge (United Kingdom); Charlton, S J [comp.; Bath Univ. (United Kingdom)

    1992-10-01

    The main objective of the research was to use Simulation Program for Internal Combustion Engines (SPICE) to quantify the potential offered by Variable Valve Timing (VVT) in improving engine performance. A model has been constructed of a particular engine using SPICE. The model has been validated with experimental data, and it has been shown that accurate predictions are made when the valve timing is changed. (author)

  8. Modeling category-level purchase timing with brand-level marketing variables

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard)

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are however only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from

  9. Correlates of adolescent sleep time and variability in sleep time: the role of individual and health related characteristics.

    Science.gov (United States)

    Moore, Melisa; Kirchner, H Lester; Drotar, Dennis; Johnson, Nathan; Rosen, Carol; Redline, Susan

    2011-03-01

    Adolescents are predisposed to short sleep duration and irregular sleep patterns due to certain host characteristics (e.g., age, pubertal status, gender, ethnicity, socioeconomic class, and neighborhood distress) and health-related variables (e.g., ADHD, asthma, birth weight, and BMI). The aim of the current study was to investigate the relationship between such variables and actigraphic measures of sleep duration and variability. Cross-sectional study of 247 adolescents (48.5% female, 54.3% ethnic minority, mean age of 13.7 years) involved in a larger community-based cohort study. Significant univariate predictors of sleep duration included gender, minority ethnicity, neighborhood distress, parent income, and BMI. In multivariate models, gender, minority status, and BMI were significantly associated with sleep duration (all p < .05), with girls, non-minority adolescents, and those of a lower BMI obtaining more sleep. Univariate models demonstrated that age, minority ethnicity, neighborhood distress, parent education, parent income, pubertal status, and BMI were significantly related to variability in total sleep time. In the multivariate model, age, minority status, and BMI were significantly related to variability in total sleep time (all p < .05), with younger, non-minority adolescents, and those of a lower BMI obtaining more regular sleep. These data show differences in sleep patterns in population sub-groups of adolescents which may be important in understanding pediatric health risk profiles. Sub-groups that may particularly benefit from interventions aimed at improving sleep patterns include boys, overweight, and minority adolescents. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  11. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

    The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, QBO, the solar cycle and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, where MLP performs significantly better than the remaining models. However, variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January, early February 2012.

  12. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed T_c) obtained by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of T_c are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting using the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
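    A minimal sketch of estimating T_c for a fixed-parameter logistic map by comparing a double-precision run against a high-precision reference via mpmath; the tolerance and working precision are arbitrary choices, and the paper's VPLM would additionally let the parameter r vary in time.

```python
from mpmath import mp, mpf

# Reliable computation time: the first iteration at which the double-
# precision orbit departs from a high-precision reference by more than tol.
mp.dps = 60                      # 60 significant digits for the reference
r, tol, n_max = 4.0, 1e-3, 200

x_double = 0.2
x_ref = mpf("0.2")
Tc = n_max
for n in range(1, n_max + 1):
    x_double = r * x_double * (1 - x_double)
    x_ref = mpf(r) * x_ref * (1 - x_ref)
    if abs(x_double - float(x_ref)) > tol:
        Tc = n
        break

print("reliable computation time T_c =", Tc, "iterations")
```

Since round-off error roughly doubles each iteration at r = 4, T_c lands near log2(tol / eps) ≈ 40-50 iterations for double precision.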

  13. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  14. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
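    The coupling principle, combining per-model p-values by Fisher's method, can be illustrated with scipy.stats.combine_pvalues. The p-values below are invented for illustration; in practice they would come from score tests of one covariate in each cause-specific or landmark Cox model.

```python
from scipy.stats import combine_pvalues

# One score-test p-value per Cox model for the same covariate
# (hypothetical numbers, e.g. three cause-specific hazard models).
p_per_model = [0.04, 0.20, 0.01]

# Fisher's method: -2 * sum(log p_i) ~ chi-squared with 2k degrees of freedom.
stat, p_combined = combine_pvalues(p_per_model, method="fisher")
print(f"Fisher chi2 = {stat:.2f}, combined p = {p_combined:.4f}")
```

Covariates would then be ranked (or entered stepwise) by this combined criterion rather than by any single model's p-value.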

  15. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    Science.gov (United States)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, but this results in many of the original explanatory variables losing their statistical significance and certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.

  16. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    Science.gov (United States)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence cocoa black pod disease incidence, and it is important to quantify the black pod disease variation due to the effect of climate variables. Time series analysis, especially the auto-regressive moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence to forecast the right time to control the incidence. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. In order to improve forecasting performance, other explanatory variables such as climate variables should be included in the ARIMA model, giving an ARIMAX model. Therefore, this paper studies the effect of climate variables on cocoa black pod disease incidence using the ARIMAX model. The findings of the study showed that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RH(t-7), gave a better R-squared value than an ARIMA model using MA(1); this could be used to forecast black pod incidence and assist farmers in timely application of fungicide spraying and cultural practices to control the incidence.

  17. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

    OBJECTIVE: To analyze the factors associated with breastfeeding duration by two statistical models. METHODS: A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits, socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression, in which the dependent variable was the presence of a breastfeeding child at different post-natal ages. RESULTS: In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS: Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate to analyze such factors in longitudinal studies.

  18. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  19. Preferences for travel time variability – A study of Danish car drivers

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Rich, Jeppe

    Travel time variability (TTV) is a measure of the extent of unpredictability in travel times. It is generally accepted that TTV has a negative effect on travellers' wellbeing and overall utility of travelling, and valuation of variability is an important issue in transport demand modelling … preferences, to exclude non-traders, and to avoid complicated issues related to scheduled public transport services. The survey uses customised Internet questionnaires, containing a series of questions related to the traveller's most recent morning trip to work, e.g.:
    • Travel time experienced on this day,
    • Number of stops along the way, their duration, and whether these stops involved restrictions on time of day,
    • Restrictions regarding departure time from home or arrival time at work,
    • How often such a trip was made within the last month and the range of experienced travel times,
    • What the traveller…

  20. Smearing model and restoration of star image under conditions of variable angular velocity and long exposure time.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin

    2014-03-10

    The star tracker is one of the most promising attitude measurement devices widely used in spacecraft for its high accuracy. High dynamic performance is becoming its major restriction, and requires immediate focus and promotion. A star image restoration approach based on the motion degradation model of variable angular velocity is proposed in this paper. This method can overcome the problem of energy dispersion and signal to noise ratio (SNR) decrease resulting from the smearing of the star spot, thus preventing failed extraction and decreased star centroid accuracy. Simulations and laboratory experiments are conducted to verify the proposed methods. The restoration results demonstrate that the described method can recover the star spot from a long motion trail to the shape of Gaussian distribution under the conditions of variable angular velocity and long exposure time. The energy of the star spot can be concentrated to ensure high SNR and high position accuracy. These features are crucial to the subsequent star extraction and the whole performance of the star tracker.

  1. Modeling dynamic effects of promotion on interpurchase times

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    In this paper we put forward a duration model to analyze the dynamic effects of marketing-mix variables on interpurchase times. We extend the accelerated failure-time model with an autoregressive structure. An important feature of our model is that it allows for different long-run and

  2. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
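    A small numerical illustration of this caution: prewhitening a series with its fitted lag-1 autocorrelation strips away most of the variability at longer averaging scales, which is exactly the information a reconstruction of climate extremes relies on. The AR(1) example below is a generic stand-in, not tree-ring data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Persistent AR(1) series standing in for a hydroclimatic record.
n, phi = 2000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]      # fitted lag-1 autocorrelation
resid = x[1:] - phi_hat * x[:-1]                # prewhitened (residual) series

def sd_of_block_means(z, m):
    """Std. dev. of non-overlapping block means of length m (variability at scale m)."""
    k = len(z) // m
    return z[:k * m].reshape(k, m).mean(axis=1).std()

for m in (1, 10, 50):
    print(f"scale {m:>3}: raw SD {sd_of_block_means(x, m):.3f}, "
          f"prewhitened SD {sd_of_block_means(resid, m):.3f}")
```

The prewhitened series loses variability much faster with averaging scale than the raw series, mirroring the underestimation of persistent extremes such as droughts described above.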

  3. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

    Confounding in a counterfactual model with three binary variables is discussed in this paper. According to the relationship between the control variable and the covariate variable, we investigate three counterfactual models: the control variable is independent of the covariate variable, the control variable has an effect on the covariate variable, and the covariate variable affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions

  5. Travel time variability and airport accessibility

    NARCIS (Netherlands)

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2011-01-01

    We analyze the cost of access travel time variability for air travelers. Reliable access to airports is important since the cost of missing a flight is likely to be high. First, the determinants of the preferred arrival times at airports are analyzed. Second, the willingness to pay (WTP) for

  6. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that for each data set nine CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields. Copyright © Bentham Science Publishers.

  7. Modeling category-level purchase timing with brand-level marketing variables

    OpenAIRE

    Fok, D.; Paap, R.

    2003-01-01

    Purchase timing of households is usually modeled at the category level. Marketing efforts are, however, only available at the brand level. Hence, to describe category-level interpurchase times using marketing efforts one has to construct a category-level measure of marketing efforts from the marketing mix of individual brands. In this paper we discuss two standard approaches suggested in the literature to solve this problem, that is, using individual choice shares as weights to aver...

  8. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
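
    A schematic sketch of the pipeline this record describes: a Random Forest trained on per-source features and scored by 10-fold cross-validation, then used to classify unknown sources probabilistically. The feature matrix and labels below are random placeholders, not the 2XMMi-DR2 features.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(873, 20))    # 873 training sources, 20 placeholder features
        y = rng.integers(0, 7, size=873)  # 7 classes, placeholder labels

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
        print(f"CV accuracy: {scores.mean():.2%}")   # the paper reports ~97%

        # Probabilistic classification of the 411 unknown sources, as in the catalog step
        clf.fit(X, y)
        proba = clf.predict_proba(rng.normal(size=(411, 20)))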

  9. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  10. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  11. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  12. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

    We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg² selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars, which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.

  13. Changes in Southern Hemisphere circulation variability in climate change modelling experiments

    International Nuclear Information System (INIS)

    Grainger, Simon; Frederiksen, Carsten; Zheng, Xiaogu

    2007-01-01

    The seasonal mean of a climate variable can be considered as a statistical random variable, consisting of signal and noise components (Madden 1976). The noise component consists of internal intraseasonal variability and is not predictable on time scales of a season or more ahead. The signal consists of slowly varying external and internal variability and is potentially predictable on seasonal time scales. The method of Zheng and Frederiksen (2004) has been applied to monthly time series of 500hPa geopotential height from models submitted to the Coupled Model Intercomparison Project (CMIP3) experiment to obtain covariance matrices of the intraseasonal and slow components of covariability for summer and winter. The Empirical Orthogonal Functions (EOFs) of the intraseasonal and slow covariance matrices for the second half of the 20th century are compared with those observed by Frederiksen and Zheng (2007). The leading EOF in summer and winter, for both the intraseasonal and slow components of covariability, is the Southern Annular Mode (see, e.g., Kiladis and Mo 1998). This is generally reproduced by the CMIP3 models, although with different amounts of variance. The observed secondary intraseasonal covariability modes, with wave-4 patterns in summer and wave-3 or blocking patterns in winter, are also generally seen in the models, although the actual spatial patterns differ. For the slow covariability, the models are less successful in reproducing the two observed ENSO modes, with generally only one of them represented among the leading EOFs. However, most models reproduce the observed South Pacific wave pattern. The intraseasonal and slow covariance matrices of 500hPa geopotential height under three climate change scenarios are also analysed and compared with those found for the second half of the 20th century. Through aggregating the results from a number of CMIP3 models, a consensus estimate of the changes in Southern Hemisphere variability, and their

  14. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds who are at high risk for cardiac arrest. There are, however, no models to predict cardiac arrest in pediatric intensive care units, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that allows time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps necessary for building predictive models from time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
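
    A minimal sketch, with hypothetical vital-sign data, of two of the steps listed above: defining a time window (step 4) and calculating time series features as latent variables within it (step 6). The window length and the particular features are illustrative assumptions, not the authors' specification.

        import numpy as np

        def window_features(signal: np.ndarray, window: int) -> np.ndarray:
            """Slope, mean, and SD over trailing windows of one monitored variable."""
            feats = []
            for end in range(window, len(signal) + 1):
                seg = signal[end - window:end]
                slope = np.polyfit(np.arange(window), seg, 1)[0]  # deterioration trend
                feats.append((slope, seg.mean(), seg.std()))
            return np.array(feats)

        # Placeholder heart-rate series: stable, then deteriorating
        heart_rate = np.r_[np.full(60, 120.0), np.linspace(120, 180, 60)]
        print(window_features(heart_rate, window=12)[-1])  # latest window's features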

  15. Natural climate variability in a coupled model

    International Nuclear Information System (INIS)

    Zebiak, S.E.; Cane, M.A.

    1990-01-01

    Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model was originally designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities that are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

  16. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain that exhibit neuronal activity in response to a stimulus and, simultaneously, infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, therefore capturing correlation among nearby voxels. In order to infer association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov Chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related design, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    Science.gov (United States)

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  18. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection via a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  19. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

    Rotor systems carried in transportation systems or under seismic excitation are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that base rotations introduce nonlinearities into the model. To verify the proposed model, a novel test rig that can simulate base angular movement is designed, and dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on the response are discussed by means of FFT, waterfall and frequency response plots and orbits of the rotor. The FFT and waterfall plots of the disk's horizontal and vertical vibrations show peaks at multiples of the base frequency and at sum and difference tones of the rotating frequency and the base frequency, whose amplitudes increase markedly when they meet the whirling frequencies of the rotor system.

  20. Interannual modes of variability of Southern Hemisphere atmospheric circulation in CMIP3 models

    International Nuclear Information System (INIS)

    Grainger, S; Frederiksen, C S; Zheng, X

    2010-01-01

    The atmospheric circulation acts as a bridge between large-scale sources of climate variability, and climate variability on regional scales. Here a statistical method is applied to monthly mean Southern Hemisphere 500hPa geopotential height to separate the interannual variability of the seasonal mean into intraseasonal and slowly varying (time scales of a season or longer) components. Intraseasonal and slow modes of variability are estimated from realisations of models from the Coupled Model Intercomparison Project Phase 3 (CMIP3) twentieth century coupled climate simulation (20c3m) and are evaluated against those estimated from reanalysis data. The intraseasonal modes of variability are generally well reproduced across all CMIP3 20c3m models for both Southern Hemisphere summer and winter. The slow modes are in general less well reproduced than the intraseasonal modes, and there are larger differences between realisations than for the intraseasonal modes. New diagnostics are proposed to evaluate model variability. It is found that differences between realisations from each model are generally less than inter-model differences. Differences between model-mean diagnostics are found. The results obtained are applicable to assessing the reliability of changes in atmospheric circulation variability in CMIP3 models and for their suitability for further studies of regional climate variability.

  1. Resting heart rate variability is associated with ex-Gaussian metrics of intra-individual reaction time variability.

    Science.gov (United States)

    Spangler, Derek P; Williams, DeWayne P; Speller, Lassiter F; Brooks, Justin R; Thayer, Julian F

    2018-03-01

    The relationships between vagally mediated heart rate variability (vmHRV) and the cognitive mechanisms underlying performance can be elucidated with ex-Gaussian modeling, an approach that quantifies two different forms of intra-individual variability (IIV) in reaction time (RT). To this end, the current study examined relations of resting vmHRV to whole-distribution and ex-Gaussian IIV. Subjects (N = 83) completed a 5-minute baseline while vmHRV (root mean square of successive differences; RMSSD) was measured. Ex-Gaussian (sigma, tau) and whole-distribution (standard deviation) estimates of IIV were derived from reaction times on a Stroop task. Resting vmHRV was found to be inversely related to tau (exponential IIV) but not to sigma (Gaussian IIV) or the whole-distribution standard deviation of RTs. Findings suggest that individuals with high vmHRV can better prevent attentional lapses but not difficulties with motor control. These findings inform the differential relationships of cardiac vagal control to the cognitive processes underlying human performance. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
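
    A minimal sketch (placeholder data, not the study's) of the two quantities this record relates: resting vmHRV as RMSSD over interbeat intervals, and the ex-Gaussian decomposition of an RT distribution, where sigma is the Gaussian spread and tau the exponential (lapse-like) tail. scipy's exponnorm uses the shape parameter K = tau/sigma.

        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(2)

        rr_ms = rng.normal(850, 40, size=300)          # interbeat intervals (ms), placeholder
        rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # root mean square of successive differences

        rts = rng.normal(500, 40, 800) + rng.exponential(120, 800)  # ex-Gaussian-like RTs
        K, mu, sigma = exponnorm.fit(rts)  # returns (shape K, loc, scale)
        tau = K * sigma                    # exponential component of IIV
        print(f"RMSSD={rmssd:.1f} ms, mu={mu:.0f}, sigma={sigma:.0f}, tau={tau:.0f}")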

  2. Analysis of time variable gravity data over Africa

    International Nuclear Information System (INIS)

    Barletta, Valentina R.; Aoudia, Abdelkarim

    2010-01-01

    phenomena, climate and even to human activities. The time analysis approach proposed in this work is used to discriminate signals from different possible geographical sources mostly in the low latitude regions, where hydrology is strongest, as for example Africa. By applying our approach to hydrological models we can show similarities and differences with GRACE data. The similarities lie almost all in the geographical signatures of the periodic component even if there are differences in amplitudes and phase. While differences in the non-periodic component can be explained with the presence of other phenomena, the differences in the periodic component can be explained with missing groundwater or other defects in the hydrological models. In this way we tested the adequacy of some hydrological models commonly combined with gravity data to retrieve solid Earth geophysical signatures. Thus we show that analysis of time variable gravity data over Africa strongly requires a correct and reliable hydrological model. (author)

  3. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable, and if not, the time before loss of control. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  4. A fire management simulation model using stochastic arrival times

    Science.gov (United States)

    Eric L. Smith

    1987-01-01

    Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...

  5. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the Weather Research and Forecasting (WRF) model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  6. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy

    Directory of Open Access Journals (Sweden)

    Augusto C. M. Souza

    2018-04-01

    While fermentation is usually done at a fixed temperature, in this study the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in the controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  7. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy.

    Science.gov (United States)

    Souza, Augusto C M; Mousaviraad, Mohammad; Mapoka, Kenneth O M; Rosentrater, Kurt A

    2018-04-24

    While fermentation is usually done at a fixed temperature, in this study the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in the controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.
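
    The two records above describe the same study. As a rough illustration of the kind of nonlinear batch-fermentation system involved, here is a generic Monod-type sketch; the growth, decay, and yield terms and their temperature dependence are made-up assumptions, since the paper's actual equations are not given in the abstract.

        import numpy as np
        from scipy.integrate import solve_ivp

        def batch(t, y, temp_c):
            X, S, P = y                                           # biomass, sugar, ethanol (g/L)
            mu_max = 0.4 * np.exp(-((temp_c - 33.0) / 8.0) ** 2)  # hypothetical T-dependence
            mu = mu_max * S / (S + 1.0)                           # Monod growth on sugar
            kd = 0.002 * np.exp(0.1 * (temp_c - 20.0))            # faster cell decay when hot (assumed)
            dX = (mu - kd) * X
            dS = -mu * X / 0.1                                    # biomass yield Yxs = 0.1 (assumed)
            dP = -0.45 * dS                                       # ethanol yield on sugar (assumed)
            return [dX, dS, dP]

        sol = solve_ivp(batch, (0, 60), [1.0, 150.0, 0.0], args=(20.0,))
        print(f"ethanol after 60 h at 20 °C: {sol.y[2, -1]:.1f} g/L")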

  8. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object … hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules…

  9. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.

  10. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    Science.gov (United States)

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies in the Aerosol Optical Depth (AOD) literature use Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series obtained from past MODIS and AERONET data as input. Moreover, the results show that MODIS values can be obtained from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AODs for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD shows an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
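
    A minimal sketch, not the authors' model, of fitting a seasonal ARIMA to a monthly AOD series and forecasting ahead as the study does. The series, order, and seasonal order here are placeholders; the paper selects and validates its model with RMSE, MAPE, NBIC, and Ljung-Box diagnostics.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        idx = pd.date_range("2001-01", "2012-12", freq="MS")
        aod = pd.Series(0.5 + 0.2 * np.sin(2 * np.pi * idx.month / 12)
                        + rng.normal(0, 0.05, len(idx)), index=idx)  # synthetic monthly AOD

        fit = ARIMA(aod, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit()
        print(fit.forecast(steps=48).head())  # multi-year-ahead forecast, as in 2013-2017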

  11. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; Heijden, van der M.C.

    1996-01-01

    In many multi-echelon inventory systems the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process which generates such lead times is usually not

  12. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; van der Heijden, M.C.

    1997-01-01

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well

  13. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
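
    For reference, the sum-of-Lorentzians property mentioned above follows from the standard CARMA(p,q) power spectrum (a sketch of the textbook form, with sigma the white-noise amplitude, alpha_k the autoregressive and beta_j the moving-average coefficients; not a quotation from the paper):

        P(f) = \sigma^2 \,
          \frac{\bigl|\sum_{j=0}^{q} \beta_j \,(2\pi i f)^j\bigr|^2}
               {\bigl|\sum_{k=0}^{p} \alpha_k \,(2\pi i f)^k\bigr|^2}

    A partial-fraction expansion of this rational function over the roots of the autoregressive polynomial yields one Lorentzian term per root, with centroid set by the imaginary part of the root and width by its real part, which is what makes the CARMA PSD flexible enough to model a broad range of shapes.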

  14. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia; Siemiginowska, Aneta; Uttley, Phil

    2014-01-01

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.

  15. Excitation of Earth Rotation Variations "Observed" by Time-Variable Gravity

    Science.gov (United States)

    Chao, Ben F.; Cox, C. M.

    2005-01-01

    Time-variable gravity measurements have been made over the past two decades using the space geodetic technique of satellite laser ranging, and more recently by the GRACE satellite mission with improved spatial resolution. The degree-2 harmonic components of the time-variable gravity field contain important information about the Earth's length-of-day and polar motion excitation functions, in a way independent of the traditional "direct" Earth rotation measurements made by, for example, very-long-baseline interferometry and GPS. In particular, the (degree 2, order 1) components give the mass term of the polar motion excitation; the (2,0) component, under certain mass conservation conditions, gives the mass term of the length-of-day excitation. Combining these with yet another independent source of angular momentum estimates calculated from global geophysical fluid models (for example the atmospheric angular momentum, in both mass and motion terms) can in principle lead to new insights into the dynamics, particularly the role (or lack thereof) of the cores, in the excitation processes of the Earth rotation variations.

  16. Time variable cosmological constants from the age of universe

    International Nuclear Information System (INIS)

    Xu Lixin; Lu Jianbo; Li Wenbo

    2010-01-01

    In this Letter, a time variable cosmological constant, dubbed the age cosmological constant, is investigated, motivated by the fact that any cosmological length scale and time scale can introduce a cosmological constant or vacuum energy density into Einstein's theory. The age cosmological constant takes the form ρ_Λ = 3c²M_P²/t_Λ², where t_Λ is the age or conformal age of our universe. The effective equation of state (EoS) of the age cosmological constant is w_Λ^eff = -1 + (2/3)√(Ω_Λ)/c when the age of the universe is taken as the cosmological time scale, and w_Λ^eff = -1 + (2/3)(√(Ω_Λ)/c)(1+z) when the conformal age is used. These EoS are the same as those of the so-called agegraphic dark energy models. However, the evolution histories are different from the agegraphic ones because the evolution equations differ.

  17. The use of ZIP and CART to model cryptosporidiosis in relation to climatic variables.

    Science.gov (United States)

    Hu, Wenbiao; Mengersen, Kerrie; Fu, Shiu-Yun; Tong, Shilu

    2010-07-01

    This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis disease using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis disease. The time series CART models indicated that, when weekly maximum temperature exceeded 31 degrees C and relative humidity was less than 63%, the relative risk of cryptosporidiosis rose by 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9-47.9). These findings may have applications as a decision support tool in planning disease control and risk-management programs for cryptosporidiosis disease.
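
    A minimal sketch, on synthetic weekly data rather than the Brisbane dataset, of the ZIP half of the approach: a zero-inflated Poisson regression of weekly case counts on weather covariates, using statsmodels. Coefficients and covariates are placeholders.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(4)
        n = 300
        tmax = rng.normal(28, 4, n)       # weekly maximum temperature (°C), placeholder
        humidity = rng.normal(65, 10, n)  # weekly relative humidity (%), placeholder
        lam = np.exp(-4 + 0.15 * tmax - 0.01 * humidity)
        cases = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))  # excess zeros

        X = sm.add_constant(np.column_stack([tmax, humidity]))
        fit = ZeroInflatedPoisson(cases, X, exog_infl=np.ones((n, 1))).fit(disp=False)
        print(fit.summary())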

  18. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System.

    Science.gov (United States)

    Gruppen, Larry D; Ten Cate, Olle; Lingard, Lorelei A; Teunissen, Pim W; Kogan, Jennifer R

    2018-03-01

    Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.

  19. On the "early-time" evolution of variables relevant to turbulence models for Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant parameters before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of the mixing between two interpenetrating fluids to define the initial profiles for the turbulence model parameters. Velocities and volume fractions used in the idealized mixing model are obtained, respectively, from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted initial profiles for the turbulence model parameters and initial profiles of the parameters obtained from low Atwood number three-dimensional simulations shows reasonable agreement.

  20. Travel time variability and airport accessibility

    OpenAIRE

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2010-01-01

    This discussion paper resulted in a publication in Transportation Research Part B: Methodological (2011). Vol. 45(10), pages 1545-1559. This paper analyses the cost of access travel time variability for air travelers. Reliable access to airports is important since it is likely that the cost of missing a flight is high. First, the determinants of the preferred arrival times at airports are analyzed, including trip purpose, type of airport, flight characteristics, travel experience, type of che...

  1. Variability of Travel Times on New Jersey Highways

    Science.gov (United States)

    2011-06-01

    This report presents the results of a link and path travel time study conducted on selected New Jersey (NJ) highways to produce estimates of the corresponding variability of travel time (VTT) by departure time of the day and days of the week. The tra...

  2. Variability of gastric emptying time using standardized radiolabeled meals

    International Nuclear Information System (INIS)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability in gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers, respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half-emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences.

  3. Variability of gastric emptying time using standardized radiolabeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability in gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers, respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half-emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences.

  4. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  5. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  6. Variability in reaction time performance of younger and older adults.

    Science.gov (United States)

    Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A

    2002-03-01

    Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17-36 years) and 763 older adults (ages 54-94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.

  7. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

    Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions, and between plants at a given phytomer position, were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rose bush to capture interplant variability.

  8. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation modeling (SEM) approach for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  9. Can time be a discrete dynamical variable

    International Nuclear Information System (INIS)

    Lee, T.D.

    1983-01-01

    The possibility that time can be regarded as a discrete dynamical variable is examined through all phases of mechanics: from classical mechanics to nonrelativistic quantum mechanics, and to relativistic quantum field theories. (orig.)

  10. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    Science.gov (United States)

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for cardiac surgical patients.
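
    The study's IQR cut-offs for response variability translate directly into a small helper; the function below is a hypothetical illustration of those published thresholds (≤ 20 low, 20–40 moderate, > 40 high), not code from the study, and the example ratings are invented:

```python
import numpy as np

def rate_variability(ratings):
    """Classify panelist agreement on a 100-point scale by IQR,
    using the study's cut-offs: <=20 low, (20, 40] moderate, >40 high."""
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    if iqr <= 20:
        return iqr, "low"
    elif iqr <= 40:
        return iqr, "moderate"
    return iqr, "high"

print(rate_variability([85, 90, 95, 88, 92, 80]))  # tight agreement -> low
print(rate_variability([5, 30, 60, 95, 20, 80]))   # wide spread -> high
```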

  11. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    Science.gov (United States)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California, is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations honoring fault geometries, network complexity, and connectivity, without requiring labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal

  12. Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability

    Science.gov (United States)

    Parsons, Luke Alexander

    Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). This paleo-model disagreement in the overall

  13. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables. It covers a wide class of important models; the models and statistical methods described provide tools for analyzing a wide spectrum of complicated data; it includes illustrative examples with real data sets from business, education, medicine, public health and sociology; and it demonstrates the use of a wide variety of statistical, computational, and mathematical techniques.

  14. Uniform Estimate of the Finite-Time Ruin Probability for All Times in a Generalized Compound Renewal Risk Model

    Directory of Open Access Journals (Sweden)

    Qingwu Gao

    2012-01-01

    Full Text Available We discuss the uniform asymptotic estimate of the finite-time ruin probability for all times in a generalized compound renewal risk model, where the interarrival times of successive accidents and all the claim sizes caused by an accident are two sequences of random variables following a wide dependence structure. This wide dependence structure allows random variables to be either negatively dependent or positively dependent.
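
    For readers unfamiliar with the quantity being estimated: writing the insurer's surplus as the initial capital x plus premium income cs minus accumulated claims, the finite-time ruin probability is conventionally defined as below, with N(s) the renewal claim-number process and X_i the claim sizes. This is the standard textbook formulation, not notation taken from the paper itself:

```latex
\psi(x, t) \;=\; P\!\left( \inf_{0 \le s \le t} \left( x + cs - \sum_{i=1}^{N(s)} X_i \right) < 0 \right)
```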

  15. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
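
    The linear substructure mentioned above (a clock offset drifting with skew) is exactly what a Kalman filter tracks. A minimal sketch of that sub-problem only, assuming Gaussian delays and invented noise levels; the paper's contribution is precisely the non-Gaussian part (DPM-modelled delays handled by the particle filter), which this sketch does not attempt:

```python
import numpy as np

dt = 1.0                                # beacon interval (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])   # offset += skew * dt
H = np.array([[1.0, 0.0]])              # timestamps observe the offset
Q = np.diag([1e-6, 1e-9])               # process noise (assumed)
R = np.array([[1e-4]])                  # delay variance (Gaussian here)

x = np.zeros((2, 1))                    # state: [clock offset, clock skew]
P = np.eye(2)

def kalman_step(x, P, z):
    # predict the clock state forward one beacon interval
    x = F @ x
    P = F @ P @ F.T + Q
    # update with one offset measurement z from exchanged timestamps
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.012, 0.013, 0.015, 0.016]:  # toy offset observations (s)
    x, P = kalman_step(x, P, np.array([[z]]))
print(x.ravel())                        # estimated offset and skew
```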

  16. Increased timing variability in schizophrenia and bipolar disorder.

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    Full Text Available Theoretical and empirical evidence suggests that impaired time perception and the neural circuitry underlying internal timing mechanisms may contribute to severe psychiatric disorders, including psychotic and mood disorders. The degree to which alterations in temporal perceptions reflect deficits that exist across psychosis-related phenotypes and the extent to which mood symptoms contribute to these deficits is currently unknown. In addition, compared to schizophrenia, where timing deficits have been more extensively investigated, sub-second timing has been studied relatively infrequently in bipolar disorder. The present study compared sub-second duration estimates of schizophrenia (SZ), schizoaffective disorder (SA), non-psychotic bipolar disorder (BDNP), bipolar disorder with psychotic features (BDP), and healthy non-psychiatric controls (HC) on a well-established time perception task using sub-second durations. Participants included 66 SZ, 37 BDNP, 34 BDP, 31 SA, and 73 HC who participated in a temporal bisection task that required temporal judgements about auditory durations ranging from 300 to 600 milliseconds. Timing variability was significantly higher in SZ, BDP, and BDNP groups compared to healthy controls. The bisection point did not differ across groups. These findings suggest that both psychotic and mood symptoms may be associated with disruptions in internal timing mechanisms. Yet unexpected findings emerged. Specifically, the BDNP group had significantly increased variability compared to controls, but the SA group did not. In addition, these deficits appeared to exist independent of current symptom status. The absence of between-group differences in bisection point suggests that increased variability in the SZ and bipolar disorder groups is due to alterations in perceptual timing in the sub-second range, possibly mediated by the cerebellum, rather than cognitive deficits.

  17. Space and time evolution of two nonlinearly coupled variables

    International Nuclear Information System (INIS)

    Obayashi, H.; Totsuji, H.; Wilhelmsson, H.

    1976-12-01

    The system of two coupled differential equations is studied under the assumption that the coupling terms are proportional to the product of the dependent variables, which represent e.g. intensities or populations. It is furthermore assumed that these variables experience different linear dissipation or growth. The derivations account for space as well as time dependence of the variables. It is found that certain particular solutions can be obtained for this system, whereas a full solution in space and time as an initial value problem is outside the scope of the present paper. The system has a nonlinear equilibrium solution for which the nonlinear coupling terms balance the terms of linear dissipation. The case of space and time evolution of a small perturbation of the nonlinear equilibrium state, given the initial one-dimensional spatial distribution of the perturbation, is also considered in some detail. (auth.)
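
    One plausible reading of the system described above (coupling terms proportional to the product of the dependent variables, different linear growth or dissipation rates), written for the time-dependent part only; the symbols are our reconstruction, not the paper's notation:

```latex
\frac{\partial u_1}{\partial t} = \alpha_1 u_1 - \lambda_1 u_1 u_2,
\qquad
\frac{\partial u_2}{\partial t} = \alpha_2 u_2 - \lambda_2 u_1 u_2
```

    Here negative values of the α coefficients represent linear dissipation. The nonlinear equilibrium mentioned in the abstract corresponds to u₁* = α₂/λ₂, u₂* = α₁/λ₁, where the coupling terms balance the linear terms; spatial dependence adds transport or diffusion terms omitted from this sketch.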

  18. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
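
    A simplified stand-in for the MIMRCV idea: rank descriptors by mutual information with the target and skip candidates that are strongly correlated with descriptors already chosen. This greedy sketch is not the authors' exact algorithm; the correlation threshold (0.9), the selection size, and the toy data are assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_mi_replace_collinear(X, y, n_select=5, corr_max=0.9):
    """Greedy MI ranking that replaces collinear candidates with the
    next-best descriptor (a simplified sketch, not the paper's MIMRCV)."""
    mi = mutual_info_regression(X, y)
    order = np.argsort(mi)[::-1]          # descriptors by decreasing MI
    selected = []
    for j in order:
        # keep j only if it is not collinear with anything already chosen
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_max
               for k in selected):
            selected.append(j)
        if len(selected) == n_select:
            break
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # toy descriptor matrix
y = X[:, 3] + 0.5 * X[:, 7] ** 2 + rng.normal(scale=0.1, size=100)
print(select_mi_replace_collinear(X, y))
```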

  19. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  20. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  1. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
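
    A rough sketch of the surrogate-generation recipe described above: fit time-varying AR coefficients (here with a sliding least-squares window standing in for the paper's recursive TV-AR estimator) and re-run the fitted model on randomly permuted residuals. Window length, model order, and the toy signal are assumptions:

```python
import numpy as np

def tv_ar_surrogate(x, p=4, win=100, rng=None):
    """One time-varying-AR surrogate of x: AR(p) coefficients are re-fitted
    in a sliding window, then the model is re-run on shuffled residuals."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    start = win + p                       # burn-in before fits are possible
    coefs = np.zeros((n, p))
    resid = np.zeros(n)
    for t in range(start, n):
        # regress x[tau] on its p predecessors for tau in the window
        A = np.column_stack([x[t - win - k:t - k] for k in range(1, p + 1)])
        b = x[t - win:t]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        coefs[t] = a
        resid[t] = x[t] - x[t - p:t][::-1] @ a
    # rebuild the series with the same TV coefficients, shuffled residuals
    s = x.copy()
    noise = rng.permutation(resid[start:])
    for i, t in enumerate(range(start, n)):
        s[t] = s[t - p:t][::-1] @ coefs[t] + noise[i]
    return s

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 30, 600)) + 0.3 * rng.normal(size=600)
print(np.round(tv_ar_surrogate(x, rng=rng)[104:109], 3))
```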

  2. On Extending Temporal Models in Timed Influence Networks

    Science.gov (United States)

    2009-06-01

    among variables in a system. A situation where the impact of a variable takes some time to reach the affected variable(s) cannot be modeled by either of... [extraction residue from a worked influence-network figure: an example network over nodes A1-A4 with influence parameters h11(1) = 0.99, h11(0) = -0.99; h12(1) = 0.90, h12(0) = 0; h13(1) = 0, h13(0) = -0.90; h14(1) = -0.90, h14(0) = ...; and the corresponding h11(x1) and h12(x2)] The posterior probability of B captures the impact of an affecting event on B and can be plotted as a

  3. Time-dependent inhomogeneous jet models for BL Lac objects

    Science.gov (United States)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron self-Compton emission from an inhomogeneous relativistic jet. We have performed a uniform analysis of several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  4. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there is a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are explained.
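
    Discrete hazard regression, one of the core methods the book covers, reduces to logistic regression on a person-period expansion of the data. A minimal sketch with invented toy data; for simplicity the period effect enters linearly here, whereas per-period intercepts are more common in practice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy discrete failure times (in periods) with one binary covariate
times = np.array([3, 1, 4, 2, 5, 2])   # observed discrete failure times
event = np.array([1, 1, 0, 1, 1, 0])   # 1 = failure observed, 0 = censored
xcov  = np.array([0, 1, 0, 1, 0, 1])   # covariate

rows, ys = [], []
for t_i, d_i, x_i in zip(times, event, xcov):
    for t in range(1, t_i + 1):
        # one row per subject per period at risk; the outcome is 1 only
        # in the period where the failure actually occurs
        rows.append([t, x_i])
        ys.append(1 if (d_i == 1 and t == t_i) else 0)

model = LogisticRegression().fit(np.array(rows), np.array(ys))
# discrete-time hazard h(t | x) = P(T = t | T >= t, x)
print(model.predict_proba([[2, 1]])[0, 1])
```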

  5. Is Reaction Time Variability in ADHD Mainly at Low Frequencies?

    Science.gov (United States)

    Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.

    2013-01-01

    Background: Intraindividual variability in reaction times (RT variability) has garnered increasing interest as an indicator of cognitive and neurobiological dysfunction in children with attention deficit hyperactivity disorder (ADHD). Recent theory and research has emphasized specific low-frequency patterns of RT variability. However, whether…

  6. Universe before Planck time: A quantum gravity model

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1983-01-01

    A model for quantum gravity can be constructed by treating the conformal degree of freedom of spacetime as a quantum variable. An isotropic, homogeneous cosmological solution in this quantum gravity model is presented. The spacetime is nonsingular for all the three possible values of three-space curvature, and agrees with the classical solution for time scales larger than the Planck time scale. A possibility of quantum fluctuations creating the matter in the universe is suggested.

  7. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.

  8. Environmental versus demographic variability in stochastic predator–prey models

    International Nuclear Information System (INIS)

    Dobramysl, U; Täuber, U C

    2013-01-01

    In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)

  9. Viscous dark energy models with variable G and Λ

    International Nuclear Information System (INIS)

    Arbab, Arbab I.

    2008-01-01

    We consider a cosmological model with bulk viscosity η and variable cosmological constant Λ ∝ ρ^(-α), α = const, and variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as in the de Sitter type. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be enormous, suggesting that all matter in the universe is created during inflation. (author)

  10. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable parametric cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, tests of historical models, and an introduction to the multi-variable models.

  11. Performance of a Predictive Model for Calculating Ascent Time to a Target Temperature

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2016-12-01

    Full Text Available The aim of this study was to develop an artificial neural network (ANN) prediction model for controlling building heating systems. This model was used to calculate the ascent time of indoor temperature from the setback period (when a building was not occupied) to a target setpoint temperature (when a building was occupied). The calculated ascent time was applied to determine the proper moment to start increasing the temperature from the setback temperature to reach the target temperature at an appropriate time. Three major steps were conducted: (1) model development; (2) model optimization; and (3) performance evaluation. Two software programs—Matrix Laboratory (MATLAB) and Transient Systems Simulation (TRNSYS)—were used for model development, performance tests, and numerical simulation methods. Correlation analysis between input variables and the output variable of the ANN model revealed that two input variables (current indoor air temperature and temperature difference from the target setpoint temperature) presented relatively strong relationships with the ascent time to the target setpoint temperature. These two variables were used as input neurons. Analyzing the difference between the simulated and predicted values from the ANN model provided the optimal number of hidden neurons (9), hidden layers (3), momentum (0.9), and learning rate (0.9). At the study’s conclusion, the optimized model proved its prediction accuracy with acceptable errors.
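
    The reported architecture (two inputs, three hidden layers of nine neurons) is easy to mimic with off-the-shelf tools; a sketch using scikit-learn on synthetic data standing in for the TRNSYS simulation output. The data-generating formula and all numbers below are invented for illustration, and the training hyper-parameters are scikit-learn defaults rather than the study's MATLAB settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for simulation output: ascent time grows with the
# gap between current temperature and setpoint (all numbers invented)
t_in = rng.uniform(12, 20, 500)             # current indoor temp (deg C)
gap = rng.uniform(0.5, 8.0, 500)            # setpoint minus current temp
ascent = 9.0 * gap + 0.3 * (20 - t_in) + rng.normal(0, 2.0, 500)  # minutes

X = np.column_stack([t_in, gap])            # the study's two input neurons
model = MLPRegressor(hidden_layer_sizes=(9, 9, 9), max_iter=5000,
                     random_state=0).fit(X, ascent)
print(model.predict([[16.0, 4.0]]))         # predicted ascent time (min)
```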

  12. How to get rid of W: a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, H.; Oud, J.

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are indicators.

  13. How to get rid of W : a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, Henk; Oud, Johan

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are indicators.

  14. Competency-Based, Time-Variable Education in the Health Professions: Crossroads.

    Science.gov (United States)

    Lucey, Catherine R; Thibault, George E; Ten Cate, Olle

    2018-03-01

    Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.

  15. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  16. Predicting travel time variability for cost-benefit analysis

    NARCIS (Netherlands)

    Peer, S.; Koopmans, C.; Verhoef, E.T.

    2010-01-01

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit analyses (CBA), or only in very rough ways. This paper aims at providing simple rules on how variability can be predicted, based on travel time data from Dutch highways.

  17. Predictor Variables for Marathon Race Time in Recreational Female Runners

    OpenAIRE

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Purpose We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Methods Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat, as well as training variables such as volume and speed in running training, were related to marathon race time using bi- and multi-variate analysis.

  18. Dissociable effects of practice variability on learning motor and timing skills.

    Science.gov (United States)

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a

  19. Predictor variables for marathon race time in recreational female runners.

    Science.gov (United States)

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-06-01

    We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat, as well as training variables such as volume and speed in running training, were related to marathon race time using bi- and multi-variate analysis in 29 female runners. The marathoners completed the marathon distance within 251 (26) min, running at a speed of 10.2 (1.1) km/h. Body mass (r = 0.37), body mass index (r = 0.46), the circumferences of thigh (r = 0.51) and calf (r = 0.41), the skin-fold thicknesses of front thigh (r = 0.38) and of medial calf (r = 0.40), the sum of eight skin-folds (r = 0.44), and body fat percentage (r = 0.41) were related to marathon race time. For the variables of training, maximal distance run per week (r = -0.38), number of running training sessions per week (r = -0.46), and the speed of the training sessions (r = -0.60) were related to marathon race time. In the multi-variate analysis, the circumference of calf (P = 0.02) and the speed of the training sessions (P = 0.0014) were related to marathon race time. Marathon race time might be partially (r² = 0.50) predicted by the following equation: race time (min) = 184.4 + 5.0 × (calf circumference, cm) − 11.9 × (running speed in training, km/h) for recreational female marathoners. Variables of both anthropometry and training were related to marathon race time in recreational female marathoners and cannot be reduced to one single predictor variable. For practical applications, a low circumference of calf and a high running speed in training are associated with a fast marathon race time in recreational female runners.
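
    The reported regression equation can be applied directly; a one-line helper with the abstract's coefficients (the example inputs below are made up):

```python
def predicted_marathon_time(calf_circumference_cm, training_speed_kmh):
    """Regression equation reported in the abstract (r^2 = 0.50)."""
    return 184.4 + 5.0 * calf_circumference_cm - 11.9 * training_speed_kmh

# e.g. a runner with a 36 cm calf circumference training at 10 km/h
print(predicted_marathon_time(36.0, 10.0), "minutes")  # -> 245.4
```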

  20. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  1. Public transport travel time and its variability

    OpenAIRE

    Mazloumi Shomali, Ehsan

    2017-01-01

    Executive Summary Public transport agencies around the world are constantly trying to improve the performance of their service and to provide passengers with a more reliable service. Two major measures for evaluating the performance of a transit system are travel time and travel time variability. Information on these two measures provides operators with the capacity to identify problematic locations in a transport system and improve operating plans. Likewise, users can benefit through...

  2. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  3. A stochastic surplus production model in continuous time

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte

    2017-01-01

    Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size...
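
    In its simplest (Schaefer) form, a continuous-time stochastic surplus production model for biomass B_t under fishing mortality F_t can be written as below; SPiCT itself uses a generalised production curve and adds observation equations for catches and biomass indices, so this is only the core idea:

```latex
\mathrm{d}B_t = \left( r\,B_t\left(1 - \frac{B_t}{K}\right) - F_t B_t \right)\mathrm{d}t
              + \sigma_B B_t\,\mathrm{d}W_t
```

    Here r is the intrinsic growth rate, K the carrying capacity, σ_B the process-noise level, and W_t a standard Brownian motion; the separate dynamics assumed for F_t are what let the model propagate catch-process error into the parameter uncertainty mentioned above.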

  4. Earth System Data Records of Mass Transport from Time-Variable Gravity Data

    Science.gov (United States)

    Zlotnicki, V.; Talpe, M.; Nerem, R. S.; Landerer, F. W.; Watkins, M. M.

    2014-12-01

    Satellite measurements of time variable gravity have revolutionized the study of Earth, by measuring the ice losses of Greenland, Antarctica and land glaciers, changes in groundwater including unsustainable losses due to extraction of groundwater, the mass and currents of the oceans and their redistribution during El Niño events, among other findings. Satellite measurements of gravity have been made primarily by four techniques: satellite tracking from land stations using either lasers or Doppler radio systems, satellite positioning by GNSS/GPS, satellite to satellite tracking over distances of a few hundred km using microwaves, and through a gravity gradiometer (radar altimeters also measure the gravity field, but over the oceans only). We discuss the challenges in the measurement of gravity by different instruments, especially time-variable gravity. A special concern is how to bridge a possible gap in time between the end of life of the current GRACE satellite pair, launched in 2002, and a future GRACE Follow-On pair to be launched in 2017. One challenge in combining data from different measurement systems consists of their different spatial and temporal resolutions and the different ways in which they alias short time scale signals. Typically satellite measurements of gravity are expressed in spherical harmonic coefficients (although expansions in terms of 'mascons', the masses of small spherical caps, has certain advantages). Taking advantage of correlations among spherical harmonic coefficients described by empirical orthogonal functions and derived from GRACE data it is possible to localize the otherwise coarse spatial resolution of the laser and Doppler derived gravity models. This presentation discusses the issues facing a climate data record of time variable mass flux using these different data sources, including its validation.

  5. Change in intraindividual variability over time as a key metric for defining performance-based cognitive fatigability.

    Science.gov (United States)

    Wang, Chao; Ding, Mingzhou; Kluger, Benzi M

    2014-03-01

    Cognitive fatigability is conventionally quantified as the increase over time in either mean reaction time (RT) or error rate from two or more time periods during sustained performance of a prolonged cognitive task. There is evidence indicating that these mean performance measures may not sufficiently reflect the response characteristics of cognitive fatigue. We hypothesized that changes in intraindividual variability over time would be a more sensitive and ecologically meaningful metric for investigations of fatigability of cognitive performance. To test the hypothesis, fifteen young adults were recruited. Trait fatigue perceptions in various domains were assessed with the Multidimensional Fatigue Index (MFI). Behavioral data were then recorded during performance of a three-hour continuous cued Stroop task. Results showed that intraindividual variability, as quantified by the coefficient of variation of RT, increased linearly over the course of three hours and demonstrated a significantly greater effect size than mean RT or accuracy. Change in intraindividual RT variability over time was significantly correlated with relevant subscores of the MFI including reduced activity, reduced motivation and mental fatigue. While change in mean RT over time was also correlated with reduced motivation and mental fatigue, these correlations were significantly smaller than those associated with intraindividual RT variability. RT distribution analysis using an ex-Gaussian model further revealed that change in intraindividual variability over time reflects an increase in the exponential component of variance and may reflect attentional lapses or other breakdowns in cognitive control. These results suggest that intraindividual variability and its change over time provide important metrics for measuring cognitive fatigability and may prove useful for inferring the underlying neuronal mechanisms of both perceptions of fatigue and objective changes in performance.
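
    The paper's central metric, the coefficient of variation of RT computed per time block, takes a few lines to compute. A sketch with synthetic data in which only the spread, not the mean, drifts upward; the block count and all numbers are illustrative:

```python
import numpy as np

def cv_fatigability(rts, n_blocks=6):
    """Split a session's reaction times into consecutive blocks and return
    the coefficient of variation (SD/mean) per block; an upward trend
    across blocks is the fatigability signature discussed above."""
    blocks = np.array_split(np.asarray(rts, dtype=float), n_blocks)
    return [b.std(ddof=1) / b.mean() for b in blocks]

rng = np.random.default_rng(2)
# toy session: RT variability (not the mean) grows as the task drags on
rts = np.concatenate([rng.normal(500, 40 + 10 * k, 200) for k in range(6)])
print(np.round(cv_fatigability(rts), 3))
```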

  6. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.

  8. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  9. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the

  10. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.

  11. On the explaining-away phenomenon in multivariate latent variable models.

    Science.gov (United States)

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
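
    Explaining away is easy to reproduce numerically: with two independent latent causes of one observed variable, conditioning on the observation makes the causes dependent, and learning that one cause occurred lowers the probability of the other. A toy computation with an illustrative noisy-OR observation model; all probabilities are invented:

```python
# Two independent binary latent causes A and B of one observed variable X
pA, pB = 0.3, 0.3

def pX(a, b):
    # P(X = 1 | A = a, B = b): noisy-OR, purely illustrative
    return 1 - (1 - 0.8 * a) * (1 - 0.8 * b)

def joint(a, b):
    # joint probability P(A = a, B = b, X = 1)
    return (pA if a else 1 - pA) * (pB if b else 1 - pB) * pX(a, b)

p_a_given_x = sum(joint(1, b) for b in (0, 1)) / sum(
    joint(a, b) for a in (0, 1) for b in (0, 1))
p_a_given_x_and_b = joint(1, 1) / (joint(0, 1) + joint(1, 1))
print(round(p_a_given_x, 3), round(p_a_given_x_and_b, 3))
# P(A=1 | X=1) ~ 0.602, but P(A=1 | X=1, B=1) ~ 0.340:
# observing the alternative cause B "explains away" A
```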

  12. Spatio-temporal Variability of Albedo and its Impact on Glacier Melt Modelling

    Science.gov (United States)

    Kinnard, C.; Mendoza, C.; Abermann, J.; Petlicki, M.; MacDonell, S.; Urrutia, R.

    2017-12-01

    Albedo is an important variable for the surface energy balance of glaciers, yet its representation within distributed glacier mass-balance models is often greatly simplified. Here we study the spatio-temporal evolution of albedo on Glacier Universidad, central Chile (34°S, 70°W), using time-lapse terrestrial photography, and investigate its effect on the shortwave radiation balance and modelled melt rates. A 12 megapixel digital single-lens reflex camera was setup overlooking the glacier and programmed to take three daily images of the glacier during a two-year period (2012-2014). One image was chosen for each day with no cloud shading on the glacier. The RAW images were projected onto a 10m resolution digital elevation model (DEM), using the IMGRAFT software (Messerli and Grinsted, 2015). A six-parameter camera model was calibrated using a single image and a set of 17 ground control points (GCPs), yielding a georeferencing accuracy of accounting for possible camera movement over time. The reflectance values from the projected image were corrected for topographic and atmospheric influences using a parametric solar irradiation model, following a modified algorithm based on Corripio (2004), and then converted to albedo using reference albedo measurements from an on-glacier automatic weather station (AWS). The image-based albedo was found to compare well with independent albedo observations from a second AWS in the glacier accumulation area. Analysis of the albedo maps showed that the albedo is more spatially-variable than the incoming solar radiation, making albedo a more important factor of energy balance spatial variability. The incorporation of albedo maps within an enhanced temperature index melt model revealed that the spatio-temporal variability of albedo is an important factor for the calculation of glacier-wide meltwater fluxes.

  13. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    Full Text Available In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  14. [Hypothesis on the equilibrium point and variability of amplitude, speed and time of single-joint movement].

    Science.gov (United States)

    Latash, M; Gottlieb, G

    1990-01-01

    Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.

  15. Dissecting Time- from Tumor-Related Gene Expression Variability in Bilateral Breast Cancer

    Directory of Open Access Journals (Sweden)

    Maurizio Callari

    2018-01-01

    Full Text Available Metachronous (MBC) and synchronous bilateral breast tumors (SBC) are mostly distinct primaries, whereas paired primaries and their local recurrences (LRC) share a common origin. Intra-pair gene expression variability in MBC, SBC, and LRC derives from time/tumor microenvironment-related and tumor genetic background-related factors, and pairs represent an ideal model for trying to dissect tumor-related from microenvironment-related variability. Pairs of tumors derived from women with SBC (n = 18), MBC (n = 11), and LRC (n = 10) undergoing local-regional treatment were profiled for gene expression; similarity between pairs was measured using an intraclass correlation coefficient (ICC) computed for each gene and compared using analysis of variance (ANOVA). When considering biologically unselected genes, the highest correlations were found for primaries and paired LRC, and the lowest for MBC pairs. By instead limiting the analysis to the breast cancer intrinsic genes, correlations between primaries and paired LRC were enhanced, while lower similarities were observed for SBC and MBC. Focusing on stromal-related genes, the ICC values decreased for MBC and were significantly different from SBC. These findings indicate that it is possible to dissect intra-pair gene expression variability into components that are associated with genetic origin or with time and microenvironment by using specific gene subsets.

  16. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    Science.gov (United States)

    Stránský, V; Thinová, L

    2017-11-01

    In 2010, continuous radon measurement was established at the Mladeč Caves in the Czech Republic using the continuous radon monitor RADIM3A. In order to model the radon time series over the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurement. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
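
    For readers who want to reproduce this kind of fit, the sketch below fits a SARIMAX model with Python's statsmodels on synthetic stand-ins for the radon and atmospheric-pressure series. The non-seasonal order (5, 1, 3) mirrors the regARIMA(5,1,3) designation above; the daily seasonal order and all data are assumptions for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly series standing in for the cave measurements.
idx = pd.date_range("2010-01-01", periods=1000, freq="h")
rng = np.random.default_rng(0)
pressure = pd.Series(1000 + rng.normal(0, 5, len(idx)), index=idx)
radon_values = (50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
                - 0.5 * (pressure.to_numpy() - 1000)
                + rng.normal(0, 3, len(idx)))
radon = pd.Series(radon_values, index=idx)

model = SARIMAX(radon, exog=pressure, order=(5, 1, 3),
                seasonal_order=(1, 0, 1, 24))
result = model.fit(disp=False)

# The residuals are the quantity one would compare against seismic records.
residuals = result.resid
print(result.aic)
```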

  17. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that would further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on the tank retrieval sequence will be made in September 1995.

  18. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimation of annual interruption costs for delivery points, emphasizing the handling of time-dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculating annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time-dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs at delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices, such as annual interruption time and power and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.
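
    A minimal sketch of the Monte Carlo idea is given below: outage occurrences are sampled hour by hour with a time-of-day dependent failure rate, repair durations are drawn from a lognormal distribution, and each interruption is costed with a time-dependent load. All rates and cost parameters are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
HOURS = 8760
# Hour-of-day failure-rate profile (failures/hour), peaking in the evening.
hourly_rate = 1e-4 * (1 + 0.5 * np.sin(2 * np.pi * (np.arange(24) - 12) / 24))

def simulate_year():
    failures = rng.random(HOURS) < np.tile(hourly_rate, HOURS // 24)
    hours = np.flatnonzero(failures)
    durations = rng.lognormal(0.5, 0.8, size=hours.size)      # hours
    load = 2.0 + 1.5 * np.sin(2 * np.pi * (hours % 24) / 24)  # MW at outage
    return float(np.sum(load * durations * 12.0))             # cost units

annual_costs = np.array([simulate_year() for _ in range(1000)])
print(f"expected annual cost: {annual_costs.mean():.1f}; "
      f"95th percentile: {np.percentile(annual_costs, 95):.1f}")
```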

  20. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling, to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. (c) 2015 APA, all rights reserved.
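
    The core TVEM idea, a regression coefficient that is a smooth function of time, can be sketched by interacting a spline basis in time with the covariate. The example below is a minimal single-class illustration in Python on simulated data; it is not the authors' MixTVEM SAS macro or R function, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
t = rng.uniform(0, 1, n)                 # observation times
x = rng.normal(size=n)                   # time-varying covariate
beta_t = np.sin(2 * np.pi * t)           # true time-varying effect
y = 0.5 + beta_t * x + rng.normal(0, 0.5, n)
df = pd.DataFrame({"y": y, "t": t, "x": x})

# Interacting a B-spline basis in t with x makes the coefficient on x a
# smooth, nonparametric function of time.
fit = smf.ols("y ~ bs(t, df=6) + bs(t, df=6):x", data=df).fit()
print(fit.params.filter(like=":x"))
```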

  1. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. An autoregressive integrated moving average (ARIMA) model was used to fit a univariate time series of syphilis incidence. A separate multivariable time series model for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for modelling syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
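
    The two modelling steps named above, classical decomposition and a seasonal ARIMA fit, can be reproduced with standard tooling. The sketch below uses Python's statsmodels on a hypothetical monthly incidence series; the model order matches the reported ARIMA(0,0,1)×(0,1,1) specification, but the data are synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("2005-01-01", periods=96, freq="MS")  # 2005-2012 monthly
rng = np.random.default_rng(3)
trend = np.linspace(1.0, 3.0, len(idx))                   # three-fold rise
season = 0.3 * np.sin(2 * np.pi * idx.month / 12)
incidence = pd.Series(trend + season + rng.normal(0, 0.1, len(idx)),
                      index=idx)

# Step 1: explore seasonality and long-term trend.
decomp = seasonal_decompose(incidence, model="additive", period=12)

# Step 2: fit the seasonal ARIMA(0,0,1)x(0,1,1)_12 model reported above.
fit = SARIMAX(incidence, order=(0, 0, 1),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.aic)
```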

  2. Adaptation of endothelial cells to physiologically-modeled, variable shear stress.

    Directory of Open Access Journals (Sweden)

    Joseph S Uzarski

    Full Text Available Endothelial cell (EC) function is mediated by variable hemodynamic shear stress patterns at the vascular wall, where complex shear stress profiles directly correlate with blood flow conditions that vary temporally based on metabolic demand. The interactions of these more complex and variable shear fields with EC have not been represented in hemodynamic flow models. We hypothesized that EC exposed to pulsatile shear stress that changes in magnitude and duration, modeled directly from real-time physiological variations in heart rate, would elicit phenotypic changes relevant to their critical roles in thrombosis, hemostasis, and inflammation. Here we designed a physiological flow (PF) model based on short-term temporal changes in blood flow observed in vivo and compared it to static culture and steady flow (SF) at a fixed pulse frequency of 1.3 Hz. Results show significant changes in gene regulation as a function of temporally variable flow, indicating a reduced wound phenotype more representative of quiescence. EC cultured under PF exhibited significantly higher endothelial nitric oxide synthase (eNOS) activity (PF: 176.0 ± 11.9 nmol/10⁵ EC; SF: 115.0 ± 12.5 nmol/10⁵ EC, p = 0.002) and lower TNF-α-induced HL-60 leukocyte adhesion (PF: 37 ± 6 HL-60 cells/mm²; SF: 111 ± 18 HL-60/mm², p = 0.003) than cells cultured under SF, which is consistent with a more quiescent anti-inflammatory and anti-thrombotic phenotype. In vitro models have become increasingly adept at mimicking natural physiology and in doing so have clarified the importance of both chemical and physical cues that drive cell function. These data illustrate that the variability in metabolic demand and subsequent changes in perfusion, resulting in constantly variable shear stress, play a key role in EC function that has not previously been described.

  3. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    Science.gov (United States)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

    We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER
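
    The key ingredient that limits conversions in this scheme, a single conserved-to-primitive transformation at each cell center, is easy to illustrate. The sketch below shows the conversion pair for the 1D Euler equations with an ideal-gas equation of state (gamma = 1.4 assumed); the WENO reconstruction and space-time predictor machinery are omitted.

```python
import numpy as np

GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

def cons_to_prim(u):
    """Conserved point values u = (rho, rho*v, E) -> primitives (rho, v, p)."""
    rho, mom, energy = u
    v = mom / rho
    p = (GAMMA - 1.0) * (energy - 0.5 * rho * v**2)
    return np.array([rho, v, p])

def prim_to_cons(w):
    """Primitives (rho, v, p) -> conserved (rho, rho*v, E)."""
    rho, v, p = w
    return np.array([rho, rho * v, p / (GAMMA - 1.0) + 0.5 * rho * v**2])

# Round trip at one cell center:
w0 = np.array([1.0, 0.5, 1.0])
print(cons_to_prim(prim_to_cons(w0)))   # recovers (1.0, 0.5, 1.0)
```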

  4. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  5. Study of solar radiation prediction and modeling of relationships between solar radiation and meteorological variables

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Zhao, Na; Zeng, Xiaofan; Yan, Dong

    2015-01-01

    Highlights: • We investigate relationships between solar radiation and meteorological variables. • A strong relationship exists between solar radiation and sunshine duration. • Daily global radiation can be estimated accurately with ARMAX–GARCH models. • The MGARCH model was applied to investigate time-varying relationships. - Abstract: Approaches that employ the correlations between solar radiation and other measured meteorological variables are commonly utilized in studies. It is important to investigate the time-varying relationships between meteorological variables and solar radiation to determine which variables have the strongest correlations with solar radiation. In this study, the nonlinear autoregressive moving average with exogenous variable–generalized autoregressive conditional heteroscedasticity (ARMAX–GARCH) and multivariate GARCH (MGARCH) time-series approaches were applied to investigate the associations between solar radiation and several meteorological variables. For these investigations, the long-term daily global solar radiation series measured at three stations from January 1, 2004 until December 31, 2007 were used. Stronger relationships were observed between global solar radiation and sunshine duration than between solar radiation and temperature difference. The results show that 82–88% of the temporal variations of the global solar radiation were captured by the sunshine-duration-based ARMAX–GARCH models and 55–68% of daily variations were captured by the temperature-difference-based ARMAX–GARCH models. The advantages of the ARMAX–GARCH models were also confirmed by comparison with autoregressive moving average (ARMA) and neural network (ANN) models in the estimation of daily global solar radiation. The strong heteroscedastic persistency of the global solar radiation series was revealed by the AutoRegressive Conditional Heteroscedasticity (ARCH) and Generalized Auto...
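
    An ARMAX-GARCH-type fit can be sketched with the Python arch package, which supports an autoregressive mean with exogenous regressors (AR-X rather than full ARMA-X dynamics) and a GARCH(1,1) conditional variance. The series below are synthetic stand-ins for daily radiation and sunshine duration, not the station data used in the study.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(11)
n = 1460                                  # four years of daily data
sunshine = pd.Series(
    6 + 3 * np.sin(2 * np.pi * np.arange(n) / 365.25) + rng.normal(0, 1, n),
    name="sunshine")
radiation = 2.0 + 1.8 * sunshine + rng.normal(0, 1.5, n)

# AR(1) mean with sunshine as exogenous regressor, GARCH(1,1) variance.
model = arch_model(radiation, x=sunshine.to_frame(), mean="ARX", lags=1,
                   vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())
```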

  6. On the intra-seasonal variability within the extratropics in a general circulation model and observational data

    International Nuclear Information System (INIS)

    May, W.; Bengtsson, L.

    1994-01-01

    There are various phenomena on different spatial and temporal scales contributing to the intra-seasonal variability within the extratropics. One may notice higher-frequency baroclinic disturbances affecting the day-to-day variability of the atmosphere. But one finds also low-frequency fluctuations on a typical time scale of a few weeks. Blocking anticyclones are probably the most prominent example of such features. These fluctuations on different scales, however, are influencing each other, in particular the temporal evolution and spatial distribution. There has been observational work on various phenomena contributing to the intra-seasonal variability for a long time. In the last decade or so, however, with the increasing importance of General Circulation Models there have been some studies dealing with the intra-seasonal variability as simulated by these models

  7. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  8. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - to locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular, unrealistic representation of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships showed to be useful to

  9. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Science.gov (United States)

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

    We present the first short time-scale (~hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ~30 min cadence, selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10⁻⁶ to 10⁻⁴ Hz varies from 1 to 10 per cent, with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects remain.
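
    The basic PSD estimation step can be sketched as follows: compute a periodogram from an evenly sampled light curve and fit a power-law model P(f) proportional to f^(-slope) by least squares in log-log space. The toy light curve below is a red-noise random walk; a full treatment would use Monte Carlo simulations of the model PSD, as the authors do, to handle red-noise leakage and aliasing.

```python
import numpy as np
from scipy.signal import periodogram
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
dt = 30 * 60.0                            # ~30-min cadence, in seconds
flux = np.cumsum(rng.normal(0, 1, 4096))  # red-noise-like toy light curve
flux -= flux.mean()

freqs, power = periodogram(flux, fs=1.0 / dt)
mask = freqs > 0                          # drop the zero-frequency bin

def log_powerlaw(logf, log_amp, slope):
    return log_amp - slope * logf

(log_amp, slope), _ = curve_fit(log_powerlaw, np.log10(freqs[mask]),
                                np.log10(power[mask]))
print(f"best-fit power-law slope: {slope:.2f}")
```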

  10. Numerical Solution of the Time-Dependent Navier–Stokes Equation for Variable Density–Variable Viscosity. Part I

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Xin, H.; Neytcheva, M.

    2015-01-01

    Roč. 20, č. 2 (2015), s. 232-260 ISSN 1392-6292 Institutional support: RVO:68145535 Keywords : variable density * phase-field model * Navier-Stokes equations * preconditioning * variable viscosity Subject RIV: BA - General Mathematics Impact factor: 0.468, year: 2015 http://www.tandfonline.com/doi/abs/10.3846/13926292.2015.1021395

  11. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists in Bali. Monthly counts of Korean tourists visiting Bali from January 2010 to December 2015 were used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, with its parameters approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts, with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
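
    A minimal sketch of the TVP mechanism follows: a regression coefficient modelled as a random walk and tracked with a scalar Kalman filter. One predictor is shown for clarity (the study uses WON, INFKR and INFID), and the noise variances are illustrative assumptions, not estimated values.

```python
import numpy as np

def tvp_kalman(y, x, q=1e-3, r=0.04):
    """Filter y[t] = beta[t]*x[t] + eps with beta[t] = beta[t-1] + eta."""
    beta_path = np.zeros(len(y))
    b, P = 0.0, 1.0                      # initial state mean and variance
    for t in range(len(y)):
        P += q                           # predict: random-walk state
        K = P * x[t] / (x[t] ** 2 * P + r)   # Kalman gain
        b += K * (y[t] - b * x[t])       # update with the new observation
        P *= 1.0 - K * x[t]
        beta_path[t] = b
    return beta_path

rng = np.random.default_rng(9)
x = rng.normal(1.0, 0.3, 72)             # 72 months of a predictor series
true_beta = np.linspace(0.5, 1.5, 72)    # slowly drifting effect
y = true_beta * x + rng.normal(0, 0.2, 72)
print(tvp_kalman(y, x)[-5:])             # filtered path approaches ~1.5
```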

  12. Derivation and application of mathematical model for well test analysis with variable skin factor in hydrocarbon reservoirs

    Directory of Open Access Journals (Sweden)

    Pengcheng Liu

    2016-06-01

    Full Text Available The skin factor is often regarded as a constant in most mathematical models for well test analysis in oilfields, but this is a simplification: the actual skin factor changes over time. This paper defines the average permeability of a damaged area as a function of time by using the definition of the skin factor, thereby establishing a relationship between a variable skin factor and time. The derived variable skin factor was introduced into existing traditional models in place of a constant skin factor, and the resulting mathematical model for well test analysis with a variable skin factor was solved by Laplace transform. The dimensionless wellbore pressure and its derivative were plotted against dimensionless time on double-logarithmic axes, and these plots can be used for type-curve fitting. The effects of all the parameters in the expression for the variable skin factor were analyzed based on the dimensionless wellbore pressure and its derivative. Finally, actual well testing data from the Sheng-2 Block, Shengli Oilfield, China, were used to fit the developed type curves, which validates the applicability of the mathematical model.

  13. Ecological prediction with nonlinear multivariate time-frequency functional data models

    Science.gov (United States)

    Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.

    2013-01-01

    Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.

  14. Investigation of clinical pharmacokinetic variability of an opioid antagonist through physiologically based absorption modeling.

    Science.gov (United States)

    Ding, Xuan; He, Minxia; Kulkarni, Rajesh; Patel, Nita; Zhang, Xiaoyu

    2013-08-01

    Identifying the source of inter- and/or intrasubject variability in pharmacokinetics (PK) provides fundamental information for understanding the pharmacokinetic-pharmacodynamic relationship of a drug and projecting its efficacy and safety in clinical populations. This identification process can be challenging given that a large number of potential causes could lead to PK variability. Here we present an integrated approach of physiologically based absorption modeling to investigate the root cause of the unexpectedly high PK variability of a Phase I clinical trial drug. LY2196044 exhibited high intersubject variability in the absorption phase of plasma concentration-time profiles in humans. This could not be explained by in vitro measurements of drug properties and the excellent bioavailability with low variability observed in preclinical species. GastroPlus™ modeling suggested that the compound's optimal solubility and permeability characteristics would enable rapid and complete absorption in preclinical species and in humans. However, simulations of human plasma concentration-time profiles indicated that despite sufficient solubility and rapid dissolution of LY2196044 in humans, permeability and/or transit in the gastrointestinal (GI) tract may have been negatively affected. It was concluded that the clinical PK variability was potentially due to the drug's antagonism of opioid receptors, which affected its transit and absorption in the GI tract. Copyright © 2013 Wiley Periodicals, Inc.

  15. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however models rarely share the same cluster at all four sites indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  16. Are revised models better models? A skill score assessment of regional interannual variability

    Science.gov (United States)

    Sperber, Kenneth R.; Participating AMIP Modelling Groups

    1999-05-01

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  17. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in order...

  18. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Full Text Available Large complex thin-walled guide rails have complicated structures and non-uniform, low rigidity. Traditional cutting simulations are time consuming due to the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values of the machining surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is then proposed by analyzing the multi-direction coupled variable stiffness rule. Combined with the three-direction cutting force values, the reasonableness of existing process parameters can be verified and optimized cutting parameters can be designed.

  19. An agent-based model of cellular dynamics and circadian variability in human endotoxemia.

    Directory of Open Access Journals (Sweden)

    Tung T Nguyen

    Full Text Available As cellular variability and circadian rhythmicity play critical roles in immune and inflammatory responses, we present in this study an agent-based model of human endotoxemia to examine the interplay between circadian controls, cellular variability and stochastic dynamics of inflammatory cytokines. The model is qualitatively validated by its ability to reproduce circadian dynamics of inflammatory mediators and critical inflammatory responses after endotoxin administration in vivo. Novel computational concepts are proposed to characterize the cellular variability and synchronization of inflammatory cytokines in a population of heterogeneous leukocytes. Our results suggest that there is a decrease in cell-to-cell variability of inflammatory cytokines while their synchronization is increased after endotoxin challenge. Model parameters that are responsible for IκB production stimulated by NFκB activation and for the production of anti-inflammatory cytokines have large impacts on system behaviors. Additionally, examining time-dependent systemic responses revealed that the system is least vulnerable to endotoxin in the early morning and most vulnerable around midnight. Although much remains to be explored, proposed computational concepts and the model we have pioneered will provide important insights for future investigations and extensions, especially for single-cell studies to discover how cellular variability contributes to clinical implications.

  20. Variable Neighbourhood Search and Mathematical Programming for Just-in-Time Job-Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sunxin Wang

    2014-01-01

    Full Text Available This paper presents a combination of variable neighbourhood search and mathematical programming to minimize the sum of earliness and tardiness penalty costs of all operations for the just-in-time job-shop scheduling problem (JITJSSP). Unlike the classical E/T scheduling problem, where each job has its earliness or tardiness penalty cost, each operation in this paper has its own earliness and tardiness penalties, which are paid if the operation is completed before or after its due date. Our hybrid algorithm combines (i) a variable neighbourhood search procedure to explore the huge feasible solution space efficiently by alternating the swap and insertion neighbourhood structures, and (ii) a mathematical programming model to optimize the completion times of the operations for a given solution in each iteration. Additionally, a threshold accepting mechanism is proposed to diversify the local search of the variable neighbourhood search. Computational results on the 72 benchmark instances show that our algorithm can obtain the best known solutions for 40 problems, and the best known solutions for 33 problems are updated.
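
    A schematic of the variable neighbourhood search component is sketched below, alternating the swap and insertion neighbourhoods with a threshold accepting rule. The earliness/tardiness objective is a toy stand-in: in the paper, operation completion times for each candidate sequence are optimized with a mathematical program, which is omitted here.

```python
import random

def swap(seq):
    """Exchange two randomly chosen positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """Remove one element and reinsert it at another position."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def vns(cost, initial, iters=500, threshold=0.02):
    neighbourhoods = [swap, insertion]
    best = current = initial
    for _ in range(iters):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](current)
            if cost(candidate) < cost(current):
                current, k = candidate, 0          # improvement: restart
            elif cost(candidate) < cost(current) * (1 + threshold):
                current = candidate                # threshold accepting
                k += 1
            else:
                k += 1
        if cost(current) < cost(best):
            best = current
    return best

# Toy instance: sequence jobs so completion times land near their due dates.
durations = [3, 5, 2, 7, 4]
due_dates = [5, 10, 4, 20, 14]

def earliness_tardiness(seq):
    t = total = 0
    for j in seq:
        t += durations[j]
        total += abs(t - due_dates[j])
    return total

print(vns(earliness_tardiness, initial=list(range(5))))
```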

  1. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    To understand how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  2. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
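
    Since the paper's code is in R, here is a hypothetical Python analogue of the same GA recipe: individuals are binary masks over candidate variables, fitness is the cross-validated accuracy of a logistic regression restricted to the selected variables, and the usual selection, one-point crossover and mutation operators evolve the population.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5            # random initial masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # selection: best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.02      # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```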

  3. An Atmospheric Variability Model for Venus Aerobraking Missions

    Science.gov (United States)

    Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

    2013-01-01

    Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% (1 sigma) orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% (1 sigma). From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.

  4. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. An autoregressive integrated moving average (ARIMA) model was used to fit a univariate time series of syphilis incidence. A separate multivariable time series model for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for modelling syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682

  5. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared to those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared, using one-sided paired t-tests, to those from UVE-PLS and 1-step UVE. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method...
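
    A simplified sketch of the elimination-with-global-minimum idea is given below: variables are dropped one at a time according to a coefficient-stability criterion computed over cross-validation folds, RMSECV is recorded at every step, and the variable set at the global RMSECV minimum is kept. The stability criterion is a stand-in for the full UVE cutoff based on added noise variables, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 40))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, 80)  # 5 informative

active = list(range(X.shape[1]))
history = []
while len(active) > 2:
    Xa = X[:, active]
    pred = cross_val_predict(PLSRegression(n_components=2), Xa, y,
                             cv=KFold(5))
    rmsecv = float(np.sqrt(np.mean((y - pred.ravel()) ** 2)))
    # Coefficient stability |mean|/std across CV folds (UVE-like criterion).
    coefs = np.array([PLSRegression(n_components=2)
                      .fit(Xa[train], y[train]).coef_.ravel()
                      for train, _ in KFold(5).split(Xa)])
    stability = np.abs(coefs.mean(axis=0)) / (coefs.std(axis=0) + 1e-12)
    history.append((rmsecv, active[:]))
    del active[int(np.argmin(stability))]   # eliminate the weakest variable

best_rmsecv, best_set = min(history, key=lambda h: h[0])
print(f"global RMSECV minimum {best_rmsecv:.3f} "
      f"with {len(best_set)} variables")
```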

  6. Unified models of interactions with gauge-invariant variables

    International Nuclear Information System (INIS)

    Zet, Gheorghe

    2000-01-01

    A model of gauge theory is formulated in terms of gauge-invariant variables over a 4-dimensional space-time. Namely, we define a metric tensor $g_{\mu\nu}$ ($\mu,\nu = 0,1,2,3$) starting from the components $F^a_{\mu\nu}$ and $\tilde F^a_{\mu\nu}$ of the tensor associated with the Yang-Mills fields and its dual: $g_{\mu\nu} = \frac{1}{3\Delta^{1/3}}\,\varepsilon_{abc}\,F^a_{\mu\alpha}\,\tilde F^{b\,\alpha\beta}\,F^c_{\beta\nu}$. Here $\Delta$ is a scale factor which can be chosen in a convenient form so that the theory may be self-dual or not. The components $g_{\mu\nu}$ are interpreted as new gauge-invariant variables. The model is applied to the case when the gauge group is SU(2). For the space-time we choose two different manifolds: (i) the space-time is $R \times S^3$, where $R$ is the real line and $S^3$ is the three-dimensional sphere; (ii) the space-time is endowed with axial symmetry. We calculate the components $g_{\mu\nu}$ of the new metric for the two cases in terms of SU(2) gauge potentials. Imposing the supplementary condition that the new metric coincides with the initial metric of the space-time, we obtain the field equations (of first order in derivatives) for the gauge fields. In addition, we determine the scale factor $\Delta$ introduced in the definition of $g_{\mu\nu}$ to ensure the self-duality of our SU(2) gauge theory, namely $\frac{1}{2\sqrt{g}}\,\varepsilon^{\alpha\beta\sigma\tau} g_{\mu\alpha} g_{\nu\beta} F^a_{\sigma\tau} = F^a_{\mu\nu}$, with $g = \det(g_{\mu\nu})$. In case (i) we show that the space-time $R \times S^3$ is not compatible with a self-dual SU(2) gauge theory, but in case (ii) the condition of self-duality is satisfied. The model developed in our work can be considered as a possible way toward unification of general relativity and Yang-Mills theories. This means that the gauge theory can be formulated in close analogy with general relativity, i.e. the Yang-Mills equations are equivalent to Einstein equations with a right-hand side of a simple form. (authors)

  7. A double hit model for the distribution of time to AIDS onset

    Science.gov (United States)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection it is a random variable and is probably the longest one. The probability distribution of the incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. It is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article (i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, (ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and (iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.

  8. Predictor variables for a half marathon race time in recreational male runners.

    Science.gov (United States)

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Lepers, Romuald; Rosemann, Thomas

    2011-01-01

    The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict half marathon race times for future novice recreational male half marathoners. Eighty-four male finishers in the 'Half Marathon Basel' completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiac (r = 0.36) and medial calf skinfold (r = 0.53) were related to race time. For the variables of training and previous experience, running speed during the training sessions (r = -0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and running speed during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): race time (min) = 72.91 + 3.045 × (body mass index, kg/m²) - 3.884 × (running speed during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable.

  9. Online Synthesis for Operation Execution Time Variability on Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2014-01-01

    Previous approaches have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets. In this paper we propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus obtaining shorter application execution times. The proposed strategy has been evaluated using several benchmarks.

  10. Development and evaluation of a stochastic daily rainfall model with long-term variability

    Science.gov (United States)

    Kamal Chowdhury, A. F. M.; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony S.; Parana Manage, Nadeeka

    2017-12-01

    The primary objective of this study is to develop a stochastic rainfall generation model that can match not only the short resolution (daily) variability but also the longer resolution (monthly to multiyear) variability of observed rainfall. This study has developed a Markov chain (MC) model, which uses a two-state MC process with two parameters (wet-to-wet and dry-to-dry transition probabilities) to simulate rainfall occurrence and a gamma distribution with two parameters (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. Starting with the traditional MC-gamma model with deterministic parameters, this study has developed and assessed four other variants of the MC-gamma model with different parameterisations. The key finding is that if the parameters of the gamma distribution are randomly sampled each year from fitted distributions rather than fixed parameters with time, the variability of rainfall depths at both short and longer temporal resolutions can be preserved, while the variability of wet periods (i.e. number of wet days and mean length of wet spell) can be preserved by decadally varied MC parameters. This is a straightforward enhancement to the traditional simplest MC model and is both objective and parsimonious.
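
    The two-part generator described above is compact enough to sketch in full: a two-state Markov chain drives occurrence and a gamma distribution supplies wet-day depths. The parameter values below are illustrative; in the variant that best preserved long-term variability, the gamma parameters would be re-sampled each year from fitted distributions rather than held fixed as here.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_rainfall(n_days, p_ww=0.6, p_dd=0.8, mean=8.0, sd=10.0):
    """Two-state MC occurrence (wet-to-wet p_ww, dry-to-dry p_dd) plus
    gamma-distributed wet-day depths with given mean and std. dev."""
    shape = (mean / sd) ** 2        # gamma shape from mean/sd
    scale = sd ** 2 / mean          # gamma scale
    rain = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        p_wet = p_ww if wet else 1.0 - p_dd
        wet = rng.random() < p_wet
        if wet:
            rain[t] = rng.gamma(shape, scale)
    return rain

series = simulate_rainfall(365 * 10)
print(f"wet-day fraction: {(series > 0).mean():.2f}, "
      f"mean wet-day depth: {series[series > 0].mean():.1f} mm")
```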

  11. Continuous performance task in ADHD: Is reaction time variability a key measure?

    Science.gov (United States)

    Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B

    2018-01-01

    To compare the use of Continuous Performance Task (CPT) reaction time variability (intraindividual variability, or the standard deviation of reaction time) as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD) and of stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication, were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.

  12. Statistical Learning and Adaptive Decision-Making Underlie Human Response Time Variability in Inhibitory Control

    Directory of Open Access Journals (Sweden)

    Ning Ma

    2015-08-01

    Full Text Available Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task, in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.

  13. Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control.

    Science.gov (United States)

    Ma, Ning; Yu, Angela J

    2015-01-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.

  14. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is determining the information and predictors that are necessary and sufficient for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model. The present model combines the merits of the wavelet transform and of nonlinear time series models. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner intended to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application illustrating the plausibility of the proposed methodology is provided, and the results indicate that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
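
    A minimal sketch of the hybrid idea, assuming a discrete wavelet decomposition and a simple linear AR model standing in for the paper's nonlinear component model: decompose the series into additive mono-component signals, forecast each component one step ahead, and sum the forecasts. The wavelet (db4), level, and AR order are illustrative choices, not the paper's.

```python
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
t = np.arange(512)
flow = 10 + 3 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.5, t.size)

# Additive decomposition: reconstruct each wavelet component separately
# by zeroing all other coefficient arrays (components sum to the signal).
coeffs = pywt.wavedec(flow, 'db4', level=3)
components = []
for i in range(len(coeffs)):
    only = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(only, 'db4')[: len(flow)])

# One-step-ahead forecast per component, then recombine.
forecast = sum(
    AutoReg(comp, lags=2).fit().predict(start=len(flow), end=len(flow))[0]
    for comp in components
)
print("one-step-ahead hybrid forecast:", forecast)
```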

  15. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail. We propose two main classes of solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (a weighted sum of linear interpolations, Bayesian principal component analysis, and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series, produced by applying randomly generated gaps to the original (complete) time series, is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
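
    A minimal sketch of one imputing route named above (cubic spline interpolation) followed by a standard time-delay embedding of the completed series; the embedding dimension, delay, and synthetic data are arbitrary illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
t = np.arange(500, dtype=float)
surge = np.sin(2 * np.pi * t / 50) + 0.2 * rng.standard_normal(t.size)

missing = rng.random(t.size) < 0.1          # knock out ~10% of the samples
surge_obs = surge.copy()
surge_obs[missing] = np.nan

# Impute the gaps with a cubic spline fitted through the observed points.
ok = ~np.isnan(surge_obs)
spline = CubicSpline(t[ok], surge_obs[ok])
surge_filled = np.where(ok, surge_obs, spline(t))

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a time-delayed phase space: rows are delay vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

phase_space = delay_embed(surge_filled)
print(phase_space.shape)   # dynamical neighbors are found in this space
```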

  16. The cost of travel time variability: three measures with properties

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2016-01-01

    This paper explores the relationships between three types of measures of the cost of travel time variability: measures based on scheduling preferences and implicit departure time choice, Bernoulli type measures based on a univariate function of travel time, and mean-dispersion measures. We...

  17. Quadratic time dependent Hamiltonians and separation of variables

    International Nuclear Information System (INIS)

    Anzaldo-Meneses, A.

    2017-01-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green’s function is obtained and a comparison with the classical Hamilton–Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei–Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü–Wigner contraction of Lie groups. The new method also allows a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems. - Highlights: • Exact unitary transformation reducing time dependent quadratic quantum Hamiltonian to zero. • New separation of variables method and simultaneous uncoupling of modes. • Explicit examples of transformations for one to four dimensional problems. • New general evolution equation for quadratic form in the action, respectively Green’s function.
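
    For concreteness, the class of Hamiltonians in question can be written in a generic quadratic form; the parametrization below is a standard textbook one and not necessarily the paper's exact conventions:

```latex
H(t) = \tfrac{1}{2}\, p^{\mathsf{T}} A(t)\, p
     + \tfrac{1}{2} \bigl( q^{\mathsf{T}} B(t)\, p + p^{\mathsf{T}} B(t)^{\mathsf{T}} q \bigr)
     + \tfrac{1}{2}\, q^{\mathsf{T}} C(t)\, q
     + a(t)^{\mathsf{T}} p + b(t)^{\mathsf{T}} q
```

    Here A(t) and C(t) are symmetric time-dependent matrices and a(t), b(t) are vector-valued coefficients; it is this structure that the unitary transformations built from Heisenberg and symplectic group generators act on.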

  18. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
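
    A toy illustration of the sampling question studied, assuming a dose surrogate that is simply the inverse of total travel resistance: compare (a) one ensemble-representative resistance value applied to all segments of a realization with (b) independent stochastic variation along streamline segments. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_real, n_seg = 10_000, 20
mu, sigma = 2.0, 1.0          # lognormal parameters of segment resistance

# (a) limited variability: one sampled value applied to every segment
r_a = n_seg * rng.lognormal(mu, sigma, n_real)

# (b) broad variability: each segment draws its own resistance, so
# averaging along the streamline damps extreme (fast) pathways
r_b = rng.lognormal(mu, sigma, (n_real, n_seg)).sum(axis=1)

# Fast pathways (low total resistance) drive the high-dose tail.
for name, r in [("representative", r_a), ("per-segment", r_b)]:
    dose = 1.0 / r
    print(name, "95th percentile dose surrogate:", np.percentile(dose, 95))
```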

  19. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    Science.gov (United States)

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil - Estudo Longitudinal de Saúde do Adulto) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.
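
    A two-part (hurdle) sketch in the spirit of the zero-inflated gamma model: a logistic part for "any leisure-time activity" and a Gamma GLM with log link for the positive minutes per week. The paper's model couples these parts jointly; fitting them separately, as here, is a common approximation, and all data and coefficients below are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
male = rng.integers(0, 2, n)
age = rng.uniform(35, 74, n)
X = sm.add_constant(np.column_stack([male, (age - 50) / 10]))

# Simulate: probability of being active, then gamma-distributed minutes.
p_active = 1 / (1 + np.exp(-(-0.5 + 0.4 * male + 0.1 * (age - 50) / 10)))
active = rng.random(n) < p_active
minutes = np.where(active, rng.gamma(2.0, 60 * np.exp(0.15 * male)), 0.0)

# Part 1: who practices any activity at all (the "zero" part).
logit = sm.Logit(active.astype(float), X).fit(disp=0)

# Part 2: mean minutes/week among the active (the gamma part, log link),
# so coefficients act multiplicatively on mean active minutes.
pos = minutes > 0
gamma = sm.GLM(minutes[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(logit.params, gamma.params, sep="\n")
```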

  20. Selective attrition and intraindividual variability in response time moderate cognitive change.

    Science.gov (United States)

    Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S

    2016-01-01

    Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community-dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.

  1. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from the ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well but slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
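
    A minimal sketch of the regression setup described: a Poisson GLM for monthly hail-day counts with month as a categorical predictor plus monthly anomalies of the three large-scale variables. The data frame and column names below are simulated stand-ins, not the radar or ERA-Interim data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 13 * 6                                 # 13 seasons x 6 months (Apr-Sep)
df = pd.DataFrame({
    "month": np.tile(np.arange(4, 10), 13),
    "t2m_anom": rng.normal(0, 1, n),       # two-meter temperature anomaly
    "logcape_anom": rng.normal(0, 1, n),   # anomaly of log(CAPE)
    "shear_anom": rng.normal(0, 1, n),     # wind shear anomaly
})
lam = np.exp(0.8 + 0.3 * df.t2m_anom + 0.4 * df.logcape_anom
             - 0.2 * df.shear_anom + 0.2 * (df.month == 7))
df["hail_days"] = rng.poisson(lam)         # simulated monthly counts

model = smf.glm("hail_days ~ C(month) + t2m_anom + logcape_anom + shear_anom",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```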

  2. Automated optimization and construction of chemometric models based on highly variable raw chromatographic data.

    Science.gov (United States)

    Sinkov, Nikolai A; Johnston, Brandon M; Sandercock, P Mark L; Harynuk, James J

    2011-07-04

    Direct chemometric interpretation of raw chromatographic data (as opposed to integrated peak tables) has been shown to be advantageous in many circumstances. However, this approach presents two significant challenges: data alignment and feature selection. In order to interpret the data, the time axes must be precisely aligned so that the signal from each analyte is recorded at the same coordinates in the data matrix for each and every analyzed sample. Several alignment approaches exist in the literature and they work well when the samples being aligned are reasonably similar. In cases where the background matrix for a series of samples to be modeled is highly variable, the performance of these approaches suffers. Considering the challenge of feature selection: when the raw data are used, each signal at each time point is viewed as an individual, independent variable; with the data rates of modern chromatographic systems, this generates hundreds of thousands of candidate variables, or tens of millions of candidate variables if multivariate detectors such as mass spectrometers are utilized. Consequently, an automated approach to identify and select appropriate variables for inclusion in a model is desirable. In this research we present an alignment approach that relies on a series of deuterated alkanes, which act as retention anchors for an alignment signal, and couple this with an automated feature selection routine based on our novel cluster resolution metric for the construction of a chemometric model. The model system that we use to demonstrate these approaches is a series of simulated arson debris samples analyzed by passive headspace extraction, GC-MS, and interpreted using partial least squares discriminant analysis (PLS-DA).
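
    A structural sketch of the raw-data chemometrics pipeline: every (time, channel) point is a candidate variable, a filter prunes the candidates, and PLS-DA (PLS regression on a dummy-coded class matrix) builds the classifier. The authors' cluster resolution metric is replaced here by a plain ANOVA F-score ranking, so this is only an analogue of the approach, and all data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(6)
n_samples, n_vars = 60, 50_000            # raw data: huge variable count
X = rng.normal(0, 1, (n_samples, n_vars))
y = rng.integers(0, 2, n_samples)         # e.g. debris class labels
X[y == 1, :100] += 0.8                    # plant a weak class signature

# Feature selection: keep the top-scoring candidate variables.
F, _ = f_classif(X, y)
keep = np.argsort(F)[-200:]

# PLS-DA: PLS regression against a dummy-coded class matrix, then argmax.
Y = np.column_stack([y == 0, y == 1]).astype(float)
pls = PLSRegression(n_components=2).fit(X[:, keep], Y)
pred = pls.predict(X[:, keep]).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```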

  3. Inter-model variability and biases of the global water cycle in CMIP3 coupled climate models

    International Nuclear Information System (INIS)

    Liepert, Beate G; Previdi, Michael

    2012-01-01

    Observed changes such as increasing global temperatures and the intensification of the global water cycle in the 20th century are robustly reproduced by coupled general circulation models (CGCMs). In spite of these successes, model-to-model variability and biases that are small in first-order climate responses have considerable implications for climate predictability, especially when multi-model means are used. We show that most climate simulations of the 20th and 21st century A2 scenario performed with CMIP3 (Coupled Model Inter-comparison Project Phase 3) models have deficiencies in simulating the global atmospheric moisture balance. Large biases of only a few models (some biases reach the magnitude of the simulated global precipitation changes in the 20th and 21st centuries) affect the multi-model mean global moisture budget. An imbalanced flux of −0.14 Sv exists, while the multi-model median imbalance is only −0.02 Sv. Moreover, for most models the detected imbalance changes over time. As a consequence, in 13 of the 18 CMIP3 models examined, global annual mean precipitation exceeds global evaporation, indicating that there should be a ‘leaking’ of moisture from the atmosphere, whereas for the remaining five models a ‘flooding’ is implied. Nonetheless, in all models, the actual atmospheric moisture content and its variability correctly increases during the course of the 20th and 21st centuries. These discrepancies therefore imply an unphysical and hence ‘ghost’ sink/source of atmospheric moisture in the models whose atmospheres flood/leak. The ghost source/sink of moisture can also be regarded as atmospheric latent heating/cooling and hence as a positive/negative perturbation of the atmospheric energy budget, or non-radiative forcing, in the range of −1 to +6 W m⁻² (median +0.1 W m⁻²). The inter-model variability of the global atmospheric moisture transport from oceans to land areas, which impacts the terrestrial water cycle, is also quite high and ranges

  4. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, this method divided the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients at each order of the series, we found that the correlation coefficient was mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarified the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the validity of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
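
    A minimal sketch of the correlation-coefficient idea for an AR(1) series: the dependence component of x_t is phi * x_{t-1}, and its correlation with the original series grows with the autocorrelation phi. The thresholds below are invented for illustration, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)

def dependence_correlation(phi, n=5000):
    """Correlation between an AR(1) series and its dependence component."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    dep = phi * x[:-1]                 # dependence component of x[1:]
    return np.corrcoef(x[1:], dep)[0, 1]

# Illustrative (made-up) thresholds for the five significance classes.
thresholds = [(0.2, "no"), (0.4, "weak"), (0.6, "mid"), (0.8, "strong")]
for phi in (0.1, 0.3, 0.5, 0.7, 0.9):
    r = dependence_correlation(phi)
    label = next((name for lim, name in thresholds if r < lim), "drastic")
    print(f"phi={phi:.1f}  r={r:.2f}  variability: {label}")
```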

  5. Hierarchical Hidden Markov Models for Multivariate Integer-Valued Time-Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Di Mari, Roberto

    2018-01-01

    We propose a new flexible dynamic model for multivariate nonnegative integer-valued time-series. Observations are assumed to depend on the realization of two additional unobserved integer-valued stochastic variables which control for the time- and cross-dependence of the data. An Expectation-Maximization algorithm for maximum likelihood estimation of the model's parameters is derived. We provide conditional and unconditional (cross-)moments implied by the model, as well as the limiting distribution of the series. A Monte Carlo experiment investigates the finite sample properties of our estimation...

  6. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site, but an extension to multiple sites is illustrated, which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
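
    A minimal sketch of a doubly stochastic (Cox) process of the Markov-modulated kind: a hidden two-state chain switches the arrival intensity between a "dry" and a "storm" level, and event times are drawn by thinning. All rates are illustrative, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
rates = np.array([0.1, 5.0])        # events per hour in states 0 and 1
switch = np.array([0.02, 0.2])      # rate of leaving each state (per hour)
T, lam_max = 1000.0, rates.max()

state, t, events = 0, 0.0, []
next_switch = rng.exponential(1 / switch[state])
while t < T:
    t += rng.exponential(1 / lam_max)      # candidate event (thinning)
    while t > next_switch:                 # lazily advance the hidden chain
        state = 1 - state
        next_switch += rng.exponential(1 / switch[state])
    # Accept the candidate with probability intensity(t) / lam_max.
    if t < T and rng.random() < rates[state] / lam_max:
        events.append(t)

print(len(events), "events in", T, "hours")
```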

  7. An oilspill trajectory analysis model with a variable wind deflection angle

    Science.gov (United States)

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

    The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
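
    A sketch of one trajectory step as described: surface drift equals 3.5% of the wind speed, rotated clockwise (Northern Hemisphere) by a deflection angle, added to the surface current. The speed-dependent angle function below is a made-up placeholder for the paper's empirical formula.

```python
import numpy as np

def deflection_angle_deg(wind_speed):
    # Hypothetical formula: large deflection in light winds, small in
    # strong winds (the paper's actual empirical relation is not shown).
    return np.clip(40.0 - 2.0 * wind_speed, 5.0, 40.0)

def drift_step(position, wind, current, dt):
    """Advance the spill position (m) by one time step of dt seconds."""
    speed = np.hypot(*wind)
    theta = -np.deg2rad(deflection_angle_deg(speed))   # clockwise rotation
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    wind_drift = 0.035 * rot @ np.asarray(wind)        # 3.5% of wind speed
    return np.asarray(position) + (wind_drift + np.asarray(current)) * dt

pos = drift_step([0.0, 0.0], wind=[10.0, 0.0], current=[0.05, 0.1], dt=3600)
print(pos)
```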

  8. Effects of implementing time-variable postgraduate training programmes on the organization of teaching hospital departments.

    Science.gov (United States)

    van Rossum, Tiuri R; Scheele, Fedde; Sluiter, Henk E; Paternotte, Emma; Heyligers, Ide C

    2018-01-31

    As competency-based education has gained currency in postgraduate medical education, it is acknowledged that trainees, having individual learning curves, acquire the desired competencies at different paces. To accommodate their different learning needs, time-variable curricula have been introduced, making training no longer time-bound. This paradigm has many consequences and will, predictably, impact the organization of teaching hospitals. The purpose of this study was to determine the effects of time-variable postgraduate education on the organization of teaching hospital departments. We undertook exploratory case studies into the effects of time-variable training on teaching departments' organization. We held semi-structured interviews with clinical teachers and managers from various hospital departments. The analysis yielded six effects: (1) time-variable training requires flexible and individual planning, (2) learners must be active and engaged, (3) accelerated learning sometimes comes at the expense of clinical expertise, (4) fast-track training for gifted learners jeopardizes the continuity of care, (5) time-variable training demands more of supervisors, and hence, they need protected time for supervision, and (6) hospital boards should support time-variable training. Implementing time-variable education affects various levels within healthcare organizations, including stakeholders not directly involved in medical education. These effects must be considered when implementing time-variable curricula.

  9. Constructing Proxy Variables to Measure Adult Learners' Time Management Strategies in LMS

    Science.gov (United States)

    Jo, Il-Hyun; Kim, Dongho; Yoon, Meehyun

    2015-01-01

    This study describes the process of constructing proxy variables from recorded log data within a Learning Management System (LMS), which represents adult learners' time management strategies in an online course. Based on previous research, three variables of total login time, login frequency, and regularity of login interval were selected as…
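
    A minimal sketch of the three proxy variables from LMS login logs: total login time, login frequency, and regularity of the login interval (here measured as the standard deviation of gaps between logins, so smaller means more regular). The log schema (user_id, login, logout) is hypothetical.

```python
import pandas as pd

logs = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "login":  pd.to_datetime(["2015-03-01 09:00", "2015-03-03 21:00",
                              "2015-03-05 09:30", "2015-03-01 10:00",
                              "2015-03-10 22:00"]),
    "logout": pd.to_datetime(["2015-03-01 09:40", "2015-03-03 22:10",
                              "2015-03-05 10:00", "2015-03-01 11:30",
                              "2015-03-10 23:00"]),
})
logs["duration_min"] = (logs.logout - logs.login).dt.total_seconds() / 60

def interval_sd_hours(s):
    """SD of gaps between consecutive logins, in hours (NaN if < 2 gaps)."""
    gaps = s.sort_values().diff().dropna().dt.total_seconds() / 3600
    return gaps.std()

proxies = logs.groupby("user_id").agg(
    total_login_time=("duration_min", "sum"),
    login_frequency=("login", "count"),
    login_irregularity=("login", interval_sd_hours),
)
print(proxies)
```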

  10. Late time acceleration in a non-commutative model of modified cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Malekolkalami, B., E-mail: b.malakolkalami@uok.ac.ir [Department of Physics, University of Kurdistan, Pasdaran St., Sanandaj (Iran, Islamic Republic of); Atazadeh, K., E-mail: atazadeh@azaruniv.ac.ir [Department of Physics, Azarbaijan Shahid Madani University, 53714-161, Tabriz (Iran, Islamic Republic of); Vakili, B., E-mail: b-vakili@iauc.ac.ir [Department of Physics, Central Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2014-12-12

    We investigate the effects of non-commutativity between the position–position, position–momentum and momentum–momentum variables of a phase space corresponding to a modified cosmological model. We show that the existence of such non-commutativity results in a Moyal Poisson algebra between the phase space variables, in which the product law between the functions is an α-deformed product. We then transform the variables in such a way that the Poisson brackets between the dynamical variables take the form of a usual Poisson bracket, but this time with a noncommutative structure. For a power-law expression for the function of the Ricci scalar with which the action of the gravity model is modified, the exact solutions in the commutative and noncommutative cases are presented and compared. In terms of these solutions we address the issue of the late time acceleration in cosmic evolution.

  11. Late time acceleration in a non-commutative model of modified cosmology

    International Nuclear Information System (INIS)

    Malekolkalami, B.; Atazadeh, K.; Vakili, B.

    2014-01-01

    We investigate the effects of non-commutativity between the position–position, position–momentum and momentum–momentum variables of a phase space corresponding to a modified cosmological model. We show that the existence of such non-commutativity results in a Moyal Poisson algebra between the phase space variables, in which the product law between the functions is an α-deformed product. We then transform the variables in such a way that the Poisson brackets between the dynamical variables take the form of a usual Poisson bracket, but this time with a noncommutative structure. For a power-law expression for the function of the Ricci scalar with which the action of the gravity model is modified, the exact solutions in the commutative and noncommutative cases are presented and compared. In terms of these solutions we address the issue of the late time acceleration in cosmic evolution.
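
    The kind of deformed algebra referred to can be illustrated with a commonly used noncommutative phase-space structure; conventions and the exact deformation vary between papers, so this is an indicative form rather than necessarily the authors':

```latex
\{q_1, q_2\} = \theta, \qquad
\{p_1, p_2\} = \beta, \qquad
\{q_i, p_j\} = \delta_{ij}\,(1 + \sigma)
```

    Here θ and β encode position–position and momentum–momentum noncommutativity respectively, and σ is built from their product (for example σ = θβ/4 in some conventions).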

  12. Time variability of C-reactive protein: implications for clinical risk stratification.

    Directory of Open Access Journals (Sweden)

    Peter Bogaty

    Full Text Available C-reactive protein (CRP) is proposed as a screening test for predicting risk and guiding preventive approaches in coronary artery disease (CAD). However, the stability of repeated CRP measurements over time in subjects with and without CAD is not well defined. We sought to determine the stability of serial CRP measurements in stable subjects with distinct CAD manifestations and a group without CAD, while carefully controlling for known confounders. We prospectively studied 4 groups of 25 stable subjects each: (1) a history of recurrent acute coronary events; (2) a single myocardial infarction ≥7 years ago; (3) longstanding CAD (≥7 years) that had never been unstable; (4) no CAD. Fifteen measurements of CRP were obtained to cover 21 time-points: 3 times during one day; 5 consecutive days; 4 consecutive weeks; 4 consecutive months; and every 3 months over the year. The CRP risk threshold was set at 2.0 mg/L. We estimated variance across time-points using standard descriptive statistics and Bayesian hierarchical models. Median CRP values of the 4 groups and their pattern of variability did not differ substantially, so all subjects were analyzed together. The median individual standard deviation (SD) of CRP values within-day, within-week, between-weeks and between-months was 0.07, 0.19, 0.36 and 0.63 mg/L, respectively. Forty-six percent of subjects changed CRP risk category at least once, and 21% had ≥4 weekly and monthly CRP values in both low and high-risk categories. Considering its large intra-individual variability, it may be problematic to rely on CRP values for CAD risk prediction and therapeutic decision-making in individual subjects.

  13. Estimating inter-annual variability in winter wheat sowing dates from satellite time series in Camargue, France

    Science.gov (United States)

    Manfron, Giacinto; Delmotte, Sylvestre; Busetto, Lorenzo; Hossard, Laure; Ranghetti, Luigi; Brivio, Pietro Alessandro; Boschetti, Mirco

    2017-05-01

    Crop simulation models are commonly used to forecast the performance of cropping systems under different hypotheses of change. Their use on a regional scale is generally constrained, however, by a lack of information on the spatial and temporal variability of environment-related input variables (e.g., soil) and agricultural practices (e.g., sowing dates) that influence crop yields. Satellite remote sensing data can shed light on such variability by providing timely information on crop dynamics and conditions over large areas. This paper proposes a method for analyzing time series of MODIS satellite data in order to estimate the inter-annual variability of winter wheat sowing dates. A rule-based method was developed to automatically identify a reliable sample of winter wheat field time series, and to infer the corresponding sowing dates. The method was designed for a case study in the Camargue region (France), where winter wheat is characterized by vernalization, as in other temperate regions. The detection criteria were chosen on the grounds of agronomic expertise and by analyzing high-confidence time-series vegetation index profiles for winter wheat. This automatic method identified the target crop on more than 56% (four-year average) of the cultivated areas, with low commission errors (11%). It also captured the seasonal variability in sowing dates with errors of ±8 and ±16 days in 46% and 66% of cases, respectively. Extending the analysis to the years 2002-2012 showed that sowing in the Camargue was usually done on or around November 1st (±4 days). Comparing inter-annual sowing date variability with the main local agro-climatic drivers showed that the type of preceding crop and the weather conditions during the summer season before the wheat sowing had a prominent role in influencing winter wheat sowing dates.

  14. Aircraft model prototypes which have specified handling-quality time histories

    Science.gov (United States)

    Johnson, S. H.

    1978-01-01

    Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. The pseudodata method solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The algebraic equations to be solved are bilinear, at worst. The disadvantages are reduced generality and no assurance that the resulting model will be airplane-like in detail. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.

  15. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    Full Text Available In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs, and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers.
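
    A minimal sketch of the recipe (in Python rather than the R/Matlab mentioned in the paper): fit one HMM per trajectory, define a symmetrized distance from cross log-likelihoods (one common choice, not necessarily the paper's exact definition), and cluster the resulting distance matrix. hmmlearn's GaussianHMM handles continuous variables only, so the categorical part is ignored here.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(9)
series = [rng.normal(0, 1, (100, 2)) for _ in range(5)] + \
         [rng.normal(2, 1, (100, 2)) for _ in range(5)]

models = [GaussianHMM(n_components=2, n_iter=50).fit(x) for x in series]

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # Symmetrized per-sample log-likelihood distance between HMMs.
        d = 0.5 * (models[i].score(series[i]) - models[j].score(series[i])
                   + models[j].score(series[j]) - models[i].score(series[j]))
        D[i, j] = D[j, i] = max(d, 0.0) / len(series[i])

labels = fcluster(linkage(squareform(D), method="average"),
                  t=2, criterion="maxclust")
print(labels)
```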

  16. Tracer transport in fractures: analysis of field data based on a variable - aperture channel model

    International Nuclear Information System (INIS)

    Tsang, C.F.; Tsang, Y.W.; Hale, F.V.

    1991-06-01

    A variable-aperture channel model is used as the basis to interpret data from a three-year tracer transport experiment in fractured rocks. The data come from the so-called Stripa-3D experiment performed by Neretnieks and coworkers. Within the framework of the variable-aperture channel conceptual model, tracers are envisioned as travelling along a number of variable-aperture flow channels, whose properties are related to the mean (b̄) and standard deviation (σ_b) of the fracture aperture distribution. Two methods are developed to address the presence of strong time variation of the tracer injection flow rate in this experiment. The first approximates the early part of the injection history by an exponential decay function and is applicable to the early-time tracer breakthrough data. The second is a deconvolution method involving the use of Toeplitz matrices and is applicable over the complete period of variable injection of the tracers. Both methods give consistent results. These results include not only estimates of b̄ and σ_b, but also ranges of Peclet numbers, dispersivity and an estimate of the number of channels involved in the tracer transport. An interesting and surprising observation is that the data indicate that the Peclet number increases with the mean travel time, i.e., dispersivity decreasing with mean travel time. This trend is consistent with calculated results of tracer transport in multiple variable-aperture fractures in series. The meaning of this trend is discussed in terms of the strong heterogeneity of the flow system. (au) (22 refs.)

  17. An introduction to latent variable growth curve modeling concepts, issues, and application

    CERN Document Server

    Duncan, Terry E; Strycker, Lisa A

    2013-01-01

    This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input and output, are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples.Updated throughout, the second edition features three new chapters-growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...

  18. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, leads to misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  19. Stochastic modeling of the Fermi/LAT γ-ray blazar variability

    Energy Technology Data Exchange (ETDEWEB)

    Sobolewska, M. A.; Siemiginowska, A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Nalewajko, K., E-mail: malgosia@camk.edu.pl [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309 (United States)

    2014-05-10

    We study the γ-ray variability of 13 blazars observed with the Fermi/Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first four years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of OU processes. The OU process has power spectral density (PSD) proportional to 1/f^α with α changing at a characteristic timescale, τ₀, from 0 (τ ≫ τ₀) to 2 (τ ≪ τ₀). The PSD of the mixed OU process has two characteristic timescales and an additional intermediate region with 0 < α < 2. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time, we constrain a characteristic γ-ray timescale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 (τ₀ ≅ 25 days and τ₀ ≅ 43 days, respectively, in the observer's frame), which are longer than the soft X-ray timescales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic timescales, and infer that their Fermi/LAT PSDs resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints for sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability.
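
    A minimal sketch of the exact discretization of an Ornstein-Uhlenbeck process, whose PSD bends from 1/f^0 to 1/f^2 at the characteristic timescale τ₀. The timescale echoes the observer-frame estimate quoted above for 3C 66A (τ₀ ≈ 25 days), but the mean and amplitude parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_ou(n, dt, tau0, mu=0.0, sigma=1.0):
    """OU path sampled every dt days with relaxation timescale tau0 days."""
    x = np.empty(n)
    x[0] = mu
    a = np.exp(-dt / tau0)
    # Exact one-step standard deviation for a stationary OU process.
    sd = sigma * np.sqrt((1 - a**2) * tau0 / 2)
    for t in range(1, n):
        x[t] = mu + a * (x[t - 1] - mu) + sd * rng.standard_normal()
    return x

# Four years of daily samples, tau0 = 25 days (e.g. a log-flux curve).
log_flux = simulate_ou(n=1460, dt=1.0, tau0=25.0, mu=-6.0, sigma=0.1)
print(log_flux[:5])
```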

  20. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
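
    A minimal sketch of the paper's idea: fit a three-component Gaussian mixture to the distribution of RR intervals. The synthetic RR data below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
# Synthetic RR intervals (seconds) drawn from three regimes.
rr = np.concatenate([rng.normal(0.80, 0.03, 4000),
                     rng.normal(0.95, 0.05, 1500),
                     rng.normal(0.70, 0.02, 1000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(),
                   gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.3f}s  sd={np.sqrt(v) * 1000:.1f}ms")
```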

  1. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To avoid an excessively high model-updating frequency, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately. (paper)

  2. Bus Travel Time Deviation Analysis Using Automatic Vehicle Location Data and Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Xiaolin Gong

    2015-01-01

    Full Text Available To investigate the influences of causes of unreliability and the bus schedule recovery phenomenon on microscopic segment-level travel time variance, this study adopts Structural Equation Modeling (SEM) to specify, estimate, and measure the theoretically proposed models. The SEM model establishes and verifies hypotheses for interrelationships among travel time deviations, departure delays, segment lengths, dwell times, and the number of traffic signals and access connections. The finally accepted model demonstrates excellent fit. Most of the hypotheses are supported by the sample dataset from a bus Automatic Vehicle Location system. The SEM model confirms the bus schedule recovery phenomenon. The departure delays at bus terminals and upstream travel time deviations indeed have negative impacts on the travel time fluctuation of buses en route. Meanwhile, the segment length directly and negatively impacts travel time variability and, conversely, contributes positively to the schedule recovery process; this exogenous variable also indirectly and positively influences travel times through the existence of signalized intersections and access connections. This study offers a rational approach to analyzing travel time deviation features. The SEM model structure and estimation results facilitate the understanding of bus service performance characteristics and provide several implications for bus service planning, management, and operation.

  3. Latent variable modeling.

    Institute of Scientific and Technical Information of China (English)

    蔡力

    2012-01-01

    A latent variable model, as the name suggests, is a statistical model that contains latent, that is, unobserved, variables. Their roots go back to Spearman's 1904 seminal work[1] on factor analysis, which is arguably the first well-articulated latent variable model to be widely used in psychology, mental health research, and allied disciplines. Because of the association of factor analysis with early studies of human intelligence, the fact that key variables in a statistical model are, on occasion, unobserved has been a point of lingering contention and controversy. The reader is assured, however, that a latent variable, defined in the broadest manner, is no more mysterious than an error term in a normal theory linear regression model or a random effect in a mixed model.

  4. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  5. Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach

    Science.gov (United States)

    Selig, James P.; Preacher, Kristopher J.; Little, Todd D.

    2012-01-01

    We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…

  6. Simulating variable-density flows with time-consistent integration of Navier-Stokes equations

    Science.gov (United States)

    Lu, Xiaoyi; Pantano, Carlos

    2017-11-01

    In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation for solving the pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations of turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and the flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.

  7. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil

    Directory of Open Access Journals (Sweden)

    Aline Araújo Nobre

    2017-08-01

    Full Text Available ABSTRACT OBJECTIVE: To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. METHODS: Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil – Estudo Longitudinal de Saúde do Adulto) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. RESULTS: The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. CONCLUSIONS: The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups since physical inactivity is associated with different diseases and even premature death.

  8. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    We have studied five-dimensional homogeneous cosmological models with a variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and the physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...

  9. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity in the system dynamics, the model parameters are first obtained based on experimental data. In the study, the dynamic characteristics of indoor temperature and superheat are assumed to follow a first-order model with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model has been proposed for each to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available.

  10. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected and random forests (RF) is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
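
    A minimal sketch of RF-based input selection: rank candidate lagged inputs by impurity importance and keep the top few. In the paper the retained set then feeds a kernel extreme learning machine; here an RF regressor stands in for that final model, and all data and settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(12)
n = 2000
# Smoothed noise as a stand-in wind speed series.
wind = np.convolve(rng.normal(0, 1, n + 24), np.ones(6) / 6, "valid")[:n]

# Candidate inputs: lags 1..12 of the series.
max_lag = 12
X = np.column_stack([np.roll(wind, k)[max_lag:]
                     for k in range(1, max_lag + 1)])
y = wind[max_lag:]

# Rank candidate lags by impurity-based feature importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:4]
print("selected lags:", top + 1)

# Refit the forecasting model on the reduced feature set.
final = RandomForestRegressor(n_estimators=200, random_state=0)
final.fit(X[:, top], y)
print("in-sample R^2 with selected lags:", final.score(X[:, top], y))
```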

  11. Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Walko, Robert [Univ. of Miami, Coral Gables, FL (United States)

    2016-11-07

    The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.

  12. DYNAMIC RESPONSE OF THICK PLATES ON TWO PARAMETER ELASTIC FOUNDATION UNDER TIME VARIABLE LOADING

    OpenAIRE

    Ozgan, Korhan; Daloglu, Ayse T.

    2014-01-01

    In this paper, the behavior of foundation plates with transverse shear deformation under time-variable loading is presented using the modified Vlasov foundation model. The finite element formulation of thick plates on an elastic foundation is derived using an 8-noded finite element based on Mindlin plate theory. The selective reduced integration technique is used in the evaluation of the stiffness matrices to avoid the shear-locking problem, which arises when smaller plate thicknesses are considered. After comparis...

  13. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    Science.gov (United States)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with an external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best device for modeling and forecasting the raw seismic data of the Hindu Kush region.

  14. Bidecadal North Atlantic ocean circulation variability controlled by timing of volcanic eruptions.

    Science.gov (United States)

    Swingedouw, Didier; Ortega, Pablo; Mignot, Juliette; Guilyardi, Eric; Masson-Delmotte, Valérie; Butler, Paul G; Khodri, Myriam; Séférian, Roland

    2015-03-30

    While bidecadal climate variability has been evidenced in several North Atlantic paleoclimate records, its drivers remain poorly understood. Here we show that the subset of CMIP5 historical climate simulations that produce such bidecadal variability exhibits a robust synchronization, with a maximum in Atlantic Meridional Overturning Circulation (AMOC) 15 years after the 1963 Agung eruption. The mechanisms at play involve salinity advection from the Arctic and explain the timing of Great Salinity Anomalies observed in the 1970s and the 1990s. Simulations, as well as Greenland and Iceland paleoclimate records, indicate that coherent bidecadal cycles were excited following five Agung-like volcanic eruptions of the last millennium. Climate simulations and a conceptual model reveal that destructive interference caused by the Pinatubo 1991 eruption may have damped the observed decreasing trend of the AMOC in the 2000s. Our results imply a long-lasting climatic impact and predictability following the next Agung-like eruption.

  15. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Dynamic...

  16. Evidence for a time-invariant phase variable in human ankle control.

    Directory of Open Access Journals (Sweden)

    Robert D Gregg

    Full Text Available Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control.
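    A toy sketch of the robot-inspired idea: a joint pattern driven by a single monotonic phase variable computed from Center of Pressure progression (all numbers below are hypothetical):

```python
# Sketch of the phase-variable idea: map the heel-to-toe Center of Pressure
# (CoP) progression onto a [0, 1] phase and use it to index a joint-angle
# pattern. CoP bounds and the ankle trajectory are hypothetical.
import numpy as np

cop_heel, cop_toe = 0.00, 0.25       # CoP travel under the foot [m] (assumed)

def phase_from_cop(cop):
    """Monotonic phase variable in [0, 1] from CoP position."""
    return np.clip((cop - cop_heel) / (cop_toe - cop_heel), 0.0, 1.0)

def ankle_angle(phase):
    """Illustrative stance-phase ankle pattern [deg] as a function of phase."""
    return -5.0 + 20.0 * np.sin(np.pi * phase)

cop = 0.20                            # a perturbed, 'later' CoP position
print(ankle_angle(phase_from_cop(cop)))  # pattern resumes from the later phase
```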

  17. Modelling the co-evolution of indirect genetic effects and inherited variability.

    Science.gov (United States)

    Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter

    2018-03-28

    When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for the co-evolution of IGEs and inherited variability.

  18. Attempt to determine radon entry rate and air exchange rate variable in time from the time course of indoor radon concentration

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, J [State Office for Nuclear Protection, Prague (Czech Republic)

    1996-12-31

    For radon diagnosis in houses, the `ventilation experiment` was used as a standard method. After removal of indoor radon by draught, the build-up of the radon concentration a(t) [Bq/m³] was measured continuously, and from the time course the constant radon entry rate A [Bq/h] and the air exchange rate k [h⁻¹] were calculated by regression analysis using the model relation a(t) = A(1 − e^(−kt))/(kV), with V [m³] the volume of the room. The conditions have to be stable for several hours so that the assumption of constant A and k is justified. During the day, however, both quantities were changing independently (?), therefore a method to determine a variable entry rate A(t) and exchange rate k(t) is needed for a better understanding of the variability of the indoor radon concentration. Two approaches are given for the determination of time-variable radon entry rates and air exchange rates from continuously measured indoor radon concentration: numerical solution of the equivalent difference equations in deterministic or statistical form. The approaches are not always successful. Failures that yield the correct ratio of the two rates, but not the rates themselves, could not be explained.
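    The constant-rate regression step can be sketched directly from the model relation; the room volume and the synthetic readings below are assumptions:

```python
# Sketch of the constant-rate regression step described above, assuming the
# build-up relation a(t) = A(1 - exp(-k t)) / (k V). Values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

V = 50.0  # room volume [m^3] (assumed)

def buildup(t, A, k):
    """Indoor radon build-up after airing-out: a(t) = A(1 - e^{-kt})/(kV)."""
    return A * (1.0 - np.exp(-k * t)) / (k * V)

# Hypothetical continuous monitor readings: time [h], concentration [Bq/m^3].
t = np.linspace(0.5, 12.0, 24)
a_true = buildup(t, A=2000.0, k=0.3)
a_obs = a_true + np.random.default_rng(1).normal(0.0, 5.0, t.size)

(A_hat, k_hat), _ = curve_fit(buildup, t, a_obs, p0=(1000.0, 0.5))
print(f"entry rate A = {A_hat:.0f} Bq/h, exchange rate k = {k_hat:.2f} 1/h")
```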

  19. Modeling and optimization of process variables of wire-cut electric discharge machining of super alloy Udimet-L605

    Directory of Open Access Journals (Sweden)

    Somvir Singh Nain

    2017-02-01

    Full Text Available This paper presents the behavior of Udimet-L605 after wire electric discharge machining (WEDM) and evaluates the WEDM process using sophisticated machine learning approaches. The experimental work is designed on the basis of a Taguchi orthogonal L27 array, considering six input variables and three interactions. Three models, namely a support vector machine based on the PUK kernel, non-linear regression, and multi-linear regression, are proposed to examine the variance between experimental and predicted outcomes, and the best model is selected on the basis of its evaluation parameters and graph analysis. Grey relational analysis is the relevant approach to obtain the best combination of input variables for maximum material removal rate and minimum surface roughness. Based on statistical analysis, it is concluded that pulse-on time, the pulse-on time × pulse-off time interaction, spark-gap voltage, and wire tension are the significant variables for surface roughness, while pulse-on time, spark-gap voltage, and pulse-off time are the significant variables for material removal rate. The microstructural and compositional changes on the surface of the work material were examined by means of SEM and EDX analysis. The thickness of the white layer and the recast layer formation increase with increasing pulse-on time.

  20. Hydrological excitation of polar motion by different variables from the GLDAS models

    Science.gov (United States)

    Winska, Malgorzata; Nastula, Jolanta; Salstein, David

    2017-12-01

    Continental hydrological loading by land water, snow and ice is a process that is important for the full understanding of the excitation of polar motion. In this study, we compute different estimations of hydrological excitation functions of polar motion (as hydrological angular momentum, HAM) using various variables from the Global Land Data Assimilation System (GLDAS) models of the land-based hydrosphere. The main aim of this study is to show the influence of variables from different hydrological processes including evapotranspiration, runoff, snowmelt and soil moisture, on polar motion excitations at annual and short-term timescales. Hydrological excitation functions of polar motion are determined using selected variables of these GLDAS realizations. Furthermore, we use time-variable gravity field solutions from the Gravity Recovery and Climate Experiment (GRACE) to determine the hydrological mass effects on polar motion excitation. We first conduct an intercomparison of the maps of variations of regional hydrological excitation functions, timing and phase diagrams of different regional and global HAMs. Next, we estimate the hydrological signal in geodetically observed polar motion excitation as a residual by subtracting the contributions of atmospheric angular momentum and oceanic angular momentum. Finally, the hydrological excitations are compared with those hydrological signals determined from residuals of the observed polar motion excitation series. The results will help us understand the relative importance of polar motion excitation within the individual hydrological processes, based on hydrological modeling. This method will allow us to estimate how well the polar motion excitation budget in the seasonal and inter-annual spectral ranges can be closed.

  1. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

    This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis

  2. Predictor variables for a half marathon race time in recreational male runners

    Directory of Open Access Journals (Sweden)

    Rüst CA

    2011-08-01

    Full Text Available Christoph Alexander Rüst1, Beat Knechtle1,2, Patrizia Knechtle2, Ursula Barandun1, Romuald Lepers3, Thomas Rosemann1; 1Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland; 2Gesundheitszentrum St Gallen, St Gallen, Switzerland; 3INSERM U887, University of Burgundy, Faculty of Sport Sciences, Dijon, France. Abstract: The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the 'Half Marathon Basel' completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skin fold (r = 0.53) were related to race time. For the variables of training and previous experience, running speed during training sessions (r = −0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and running speed during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): race time (min) = 72.91 + 3.045 × (body mass index, kg/m²) − 3.884 × (running speed during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable. Keywords: anthropometry, body fat, skin-folds, training, endurance
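    Applying the published equation is simple arithmetic; a quick sketch (for illustration only, not a validated predictor):

```python
# Applying the published prediction equation (r^2 = 0.44) to a hypothetical
# runner; this only illustrates the arithmetic, not a validated calculator.
def predict_half_marathon_time(bmi_kg_m2: float, training_speed_km_h: float) -> float:
    """Race time [min] = 72.91 + 3.045*BMI - 3.884*(training running speed)."""
    return 72.91 + 3.045 * bmi_kg_m2 - 3.884 * training_speed_km_h

print(predict_half_marathon_time(23.0, 11.0))  # about 100.2 min
```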

  3. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  4. Modelling food-web mediated effects of hydrological variability and environmental flows.

    Science.gov (United States)

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically-based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  5. Global modeling of land water and energy balances. Part III: Interannual variability

    Science.gov (United States)

    Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.

    2002-01-01

    The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.

  6. An alternative approach to exact wave functions for time-dependent coupled oscillator model of charged particle in variable magnetic field

    International Nuclear Information System (INIS)

    Menouar, Salah; Maamache, Mustapha; Choi, Jeong Ryeol

    2010-01-01

    The quantum states of a time-dependent coupled oscillator model for charged particles subjected to a variable magnetic field are investigated using invariant operator methods. To do this, we have taken advantage of an alternative method, the so-called unitary transformation approach, available in the framework of quantum mechanics, as well as a generalized canonical transformation method in the classical regime. The transformed quantum Hamiltonian is obtained using suitable unitary operators and is represented in terms of two independent harmonic oscillators which have the same frequencies as the classically transformed ones. Starting from the wave functions in the transformed system, we have derived the full wave functions in the original system with the help of the unitary operators. One can easily obtain a complete description of how the charged particle behaves under the given Hamiltonian by taking advantage of these analytical wave functions.

  7. Variability of the 2014-present inflation source at Mauna Loa volcano revealed using time-dependent modeling

    Science.gov (United States)

    Johanson, I. A.; Miklius, A.; Okubo, P.; Montgomery-Brown, E. K.

    2017-12-01

    Mauna Loa volcano is the largest active volcano on Earth and in the 20th century produced roughly one eruption every seven years. The 33-year quiescence since its last eruption in 1984 has been punctuated by three inflation episodes during which magma likely entered the shallow plumbing system but was not erupted. The most recent began in 2014 and is ongoing. Unlike prior inflation episodes, the current one is accompanied by a significant increase in shallow seismicity, a pattern that is similar to earlier pre-eruptive periods. We apply the Kalman-filter-based Network Inversion Filter (NIF) to the 2014-present inflation episode using data from a 27-station continuous GPS network on Mauna Loa. The model geometry consists of a point volume source and a tabular, dike-like body, which have previously been shown to provide a good fit to deformation data from a 2004-2009 inflation episode. The tabular body is discretized into 1 km x 1 km segments. For each day, the NIF solves for the rates of opening of the tabular body segments (subject to smoothing and positivity constraints), the volume change rate in the point source, and the slip rate on a deep décollement fault surface, which is constrained to be constant (no transient slip allowed). The Kalman filter in the NIF provides smoothing both forwards and backwards in time. The model shows that the 2014-present inflation episode occurred as several sub-events rather than steady inflation, with some spatial variability in the location of the inflation sub-events. In the model, opening of the tabular body is initially concentrated below the volcano's summit, in an area roughly outlined by shallow seismicity. In October 2015, opening of the tabular body shifted to be centered beneath the southwest portion of the summit, and seismicity became concentrated in this area. By late 2016, the opening rate of the tabular body had decreased and was once again under the central part of the summit. This modeling approach has allowed us to track these variations in the inflation source through time.

  8. The necessity of connection structures in neural models of variable binding.

    Science.gov (United States)

    van der Velde, Frank; de Kamps, Marc

    2015-08-01

    In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. The one type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's conclusion that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in the wrong frame of reference, in particular an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We illustrate binding with this type of model using episode binding and the binding of words, including novel words, into sentence structures.

  9. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  11. Space-time modeling of soil moisture

    Science.gov (United States)

    Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio

    2017-11-01

    A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively on the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.
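    A one-dimensional illustration of a unit-mean multiplicative jitter, with an assumed lognormal form and strength (the paper's jitter specification may differ):

```python
# Illustrative sketch (assumed functional form): applying a multiplicative
# lognormal "jitter" to a smooth soil moisture field to deflate its spatial
# correlation at small scales, as the abstract describes qualitatively.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1000.0, 201)                     # transect [m]
smooth = 0.25 + 0.05 * np.sin(2 * np.pi * x / 800.0)  # large-scale field
sigma = 0.15                                          # jitter strength (assumed)
jitter = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=x.size)
theta = smooth * jitter                               # jittered soil moisture

# The jitter has unit mean, so it leaves the mean field unchanged while
# breaking point-to-point correlation over short separations.
print(theta.mean(), smooth.mean())
```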

  12. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    A measure of the Earth's gravity contains contributions from the solid Earth as well as from climate-related phenomena, which cannot be easily distinguished in either time or space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis of the time series. We propose an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. Comparison and validation against the fitting approach commonly used in the GRACE literature shows a very good agreement for trends and periodic signals. The method is then used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant, and so time-variable gravity data in Africa can be directly used to calibrate hydrological models.

  13. Influence of variable heat transfer coefficient of fireworks and crackers on thermal explosion critical ambient temperature and time to ignition

    Directory of Open Access Journals (Sweden)

    Guo Zerong

    2016-01-01

    Full Text Available To study the effect of a variable heat transfer coefficient of fireworks and crackers on the thermal explosion critical ambient temperature and time to ignition, the heat transfer coefficient is treated as a power function of temperature, and mathematical steady-state and unsteady-state thermal explosion models for finite cylindrical fireworks and crackers with complex shell structures are established based on two-dimensional steady-state thermal explosion theory. The influence of the variable heat transfer coefficient on the thermal explosion critical ambient temperature and time to ignition is analyzed. When the heat transfer coefficient changes with temperature under natural convection heat transfer, the critical ambient temperature decreases and the time to ignition shortens. If the ambient temperature is close to the critical ambient temperature, the influence of the variable heat transfer coefficient on the time to ignition becomes large. For the firework with an inner barrel considered in the example analysis, the critical ambient temperature of the propellant is 463.88 K and the time to ignition is 4054.9 s at 466 K, respectively 0.26 K lower and 450.8 s shorter than the values obtained without considering the change of the heat transfer coefficient. The calculation results show that the influence of the variable heat transfer coefficient on the thermal explosion time to ignition is considerable in this example. Therefore, the effect of a variable heat transfer coefficient should be taken into account in the thermal safety evaluation of fireworks to reduce potential safety hazards.
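    The 2-D model itself is not reproduced in the abstract; the following lumped-parameter sketch only illustrates how a temperature-dependent h(T) = h0·(T/T0)^n enters a time-to-ignition calculation. Every constant is a placeholder:

```python
# Highly simplified lumped-parameter sketch (not the paper's 2-D model):
# a single energy balance with Arrhenius self-heating and a heat transfer
# coefficient that is a power function of temperature, h(T) = h0*(T/T0)**n.
# All parameter values below are placeholders, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

h0, T0, n = 8.0, 300.0, 0.25      # W/m^2/K, reference T [K], exponent (assumed)
S_over_V = 30.0                   # surface-to-volume ratio [1/m] (assumed)
rho_c = 1.2e6                     # volumetric heat capacity [J/m^3/K] (assumed)
Q, Ea, R, Z = 2.0e9, 1.1e5, 8.314, 1.0e8  # heat of reaction etc. (assumed)
T_ign = 600.0                     # nominal ignition temperature [K] (assumed)

def dTdt(t, T, T_amb):
    h = h0 * (T[0] / T0) ** n
    heating = Q * Z * np.exp(-Ea / (R * T[0]))
    cooling = h * S_over_V * (T[0] - T_amb)
    return [(heating - cooling) / rho_c]

def time_to_ignition(T_amb):
    hit = lambda t, T, *a: T[0] - T_ign
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(dTdt, (0.0, 1e5), [T_amb], args=(T_amb,), events=hit)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

print(time_to_ignition(466.0))
```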

  14. Probabilistic model for the spoilage wine yeast Dekkera bruxellensis as a function of pH, ethanol and free SO2 using time as a dummy variable.

    Science.gov (United States)

    Sturm, M E; Arroyo-López, F N; Garrido-Fernández, A; Querol, A; Mercado, L A; Ramirez, M L; Combina, M

    2014-01-17

    The present study uses a probabilistic model to determine the growth/no-growth interfaces of the spoilage wine yeast Dekkera bruxellensis CH29 as a function of ethanol (10-15%, v/v), pH (3.4-4.0) and free SO2 (0-50 mg/l), using time (7, 14, 21 and 30 days) as a dummy variable. The model, built with a total of 756 growth/no-growth data points obtained in a wine-like medium, could find application in the winery industry for determining the wine conditions needed to inhibit the growth of this species. For example, at 12.5% ethanol and pH 3.7, an addition of 30 mg/l of free SO2 is necessary to inhibit yeast growth for 7 days at a growth probability of 0.01. However, the concentration of free SO2 should be raised to 48 mg/l to achieve a probability of no growth of 0.99 for 30 days under the same wine conditions. Other combinations of environmental variables can also be determined with the mathematical model, depending on the needs of the industry. Copyright © 2013 Elsevier B.V. All rights reserved.
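    A sketch of fitting such a growth/no-growth logistic model with observation time entered as a dummy (indicator) variable, on simulated data (the study's 756 observations and fitted coefficients are not reproduced):

```python
# Sketch of a growth/no-growth logistic model with time as a dummy variable,
# in the spirit of the study (statsmodels formula API; data are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "ethanol": rng.uniform(10, 15, 756),
    "pH": rng.uniform(3.4, 4.0, 756),
    "so2": rng.uniform(0, 50, 756),
    "days": rng.choice([7, 14, 21, 30], 756),
})
logit_true = 8 - 0.6 * df.ethanol + 2 * (df.pH - 3.4) - 0.15 * df.so2 + 0.04 * df.days
df["growth"] = (rng.uniform(size=756) < 1 / (1 + np.exp(-logit_true))).astype(int)

# C(days) expands the observation time into dummy (indicator) variables.
model = smf.logit("growth ~ ethanol + pH + so2 + C(days)", data=df).fit(disp=0)

# Probability of growth for a candidate wine after 7 days:
new = pd.DataFrame({"ethanol": [12.5], "pH": [3.7], "so2": [30.0], "days": [7]})
print(model.predict(new))
```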

  15. Use of variability modes to evaluate AR4 climate models over the Euro-Atlantic region

    Energy Technology Data Exchange (ETDEWEB)

    Casado, M.J.; Pastor, M.A. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-01-15

    This paper analyzes the ability of the multi-model simulations from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) to simulate the main leading modes of variability over the Euro-Atlantic region in winter: the North Atlantic Oscillation (NAO), the Scandinavian mode (SCAND), the East Atlantic Oscillation (EA) and the East Atlantic/Western Russia mode (EA/WR). These modes of variability have been evaluated both spatially, by analyzing the intensity and location of their anomaly centres, and temporally, by focusing on the probability density functions and e-folding time scales. The choice of variability modes as a tool for climate model assessment is justified by the fact that modes of variability determine local climatic conditions, and their likely change may have important implications for future climate change. It is found that all the models considered are able to simulate these four variability modes reasonably well, the SCAND being the mode that is best simulated spatially. From a temporal point of view, the NAO and SCAND modes are the best simulated. UKMO-HadGEM1 and CGCM3.1(T63) are the models best at reproducing the spatial characteristics, whereas CCSM3 and CGCM3.1(T63) are the best with regard to the temporal features. GISS-AOM is the model showing the worst performance, in terms of both spatial and temporal features. These results may bring new insight into the selection and use of specific models to simulate Euro-Atlantic climate, with some models being clearly more successful in simulating patterns of temporal and spatial variability than others. (orig.)

  16. Between-centre variability versus variability over time in DXA whole body measurements evaluated using a whole body phantom

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Olivia [Department of Radiology, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)]. E-mail: olivia.louis@az.vub.ac.be; Verlinde, Siska [Belgian Study Group for Pediatric Endocrinology (Belgium); Thomas, Muriel [Belgian Study Group for Pediatric Endocrinology (Belgium); De Schepper, Jean [Department of Pediatrics, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)

    2006-06-15

    This study aimed to compare the variability of whole body measurements, using dual energy X-ray absorptiometry (DXA), among geographically distinct centres versus that over time in a given centre. A Hologic-designed 28 kg modular whole body phantom was used, including high density polyethylene, gray polyvinylchloride and aluminium. It was scanned on seven Hologic QDR 4500 DXA devices, located in seven centres and was also repeatedly (n = 18) scanned in the reference centre, over a time span of 5 months. The mean between-centre coefficient of variation (CV) ranged from 2.0 (lean mass) to 5.6% (fat mass) while the mean within-centre CV ranged from 0.3 (total mass) to 4.7% (total area). Between-centre variability compared well with within-centre variability for total area, bone mineral content and bone mineral density, but was significantly higher for fat (p < 0.001), lean (p < 0.005) and total mass (p < 0.001). Our results suggest that, even when using the same device, the between-centre variability remains a matter of concern, particularly where body composition is concerned.

  17. Combination of a higher-tier flow-through system and population modeling to assess the effects of time-variable exposure of isoproturon on the green algae Desmodesmus subspicatus and Pseudokirchneriella subcapitata.

    Science.gov (United States)

    Weber, Denis; Schaefer, Dieter; Dorgerloh, Michael; Bruns, Eric; Goerlitz, Gerhard; Hammel, Klaus; Preuss, Thomas G; Ratte, Hans Toni

    2012-04-01

    A flow-through system was developed to investigate the effects of time-variable exposure to pesticides on algae. A recently developed algae population model was used for simulations, supported and verified by laboratory experiments. Flow-through studies with Desmodesmus subspicatus and Pseudokirchneriella subcapitata under time-variable exposure to isoproturon were performed, in which the exposure patterns were based on the results of FOrum for Co-ordination of pesticide fate models and their USe (FOCUS) model calculations for typical exposure situations via runoff or drain flow. Different types of pulsed exposure events were realized, including a whole range of repeated pulses and steep peaks as well as periods of constant exposure. Both species recovered quickly in terms of growth from short-term exposure, in accordance with substance dissipation from the system. Even at a peak 10 times the maximum predicted environmental concentration of isoproturon, only transient effects on algae populations occurred. No modified sensitivity or reduced growth was observed after repeated exposure. Model predictions of algal growth in the flow-through tests agreed well with the experimental data. The experimental boundary conditions and the physiological properties of the algae were used as the only model input; no calibration or parameter fitting was necessary. The combination of the flow-through experiments with the algae population model proved to be a powerful tool for the assessment of pulsed exposure on algae. It allowed investigation of the growth reduction and recovery potential of algae after complex exposure, which is not possible with standard laboratory experiments alone. The results of the combined approach confirm the beneficial use of population models as supporting tools in higher-tier risk assessments of pesticides. Copyright © 2012 SETAC.

  18. Dynamic and Regression Modeling of Ocean Variability in the Tide-Gauge Record at Seasonal and Longer Periods

    Science.gov (United States)

    Hill, Emma M.; Ponte, Rui M.; Davis, James L.

    2007-01-01

    Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with the IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
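    A minimal sketch of such a station-level regression, with an intercept/trend, an OM admittance parameter and annual seasonal coefficients (synthetic data; the study's estimator is more involved):

```python
# Minimal sketch of the station-level regression described above: sea level
# = intercept + trend + scaled OM + annual harmonic (all data hypothetical).
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(120) / 12.0                      # 10 years, monthly [yr]
om = np.cos(2 * np.pi * t / 3.0)               # stand-in OM time series
sea_level = 40.0 + 1.2 * om + 25.0 * np.sin(2 * np.pi * t) + rng.normal(0, 5, t.size)

# Design matrix: intercept, trend, OM admittance, annual sine/cosine.
G = np.column_stack([np.ones_like(t), t, om,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(G, sea_level, rcond=None)
residual = sea_level - G @ coef
print("admittance = %.2f, residual RMS = %.1f mm" % (coef[2], residual.std()))
```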

  19. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
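    A small sketch contrasting the two-stage residual test with a joint adjustment for the exogenous variable and time (synthetic data; the paper's adjusted variable Kendall test statistic itself is not reproduced here):

```python
# Sketch of the two-stage procedure discussed above: regress the hydrologic
# variable on an exogenous covariate, then apply the Kendall test to the
# residuals. (The paper's preferred "adjusted variable Kendall test" instead
# accounts for time in the adjustment; shown here as a joint regression.)
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
t = np.arange(40, dtype=float)                 # years
precip = rng.normal(1000.0, 150.0, t.size)     # exogenous variable
flow = 0.5 * precip + 2.0 * t + rng.normal(0, 40, t.size)  # trend of 2 units/yr

# Stage 1: remove the exogenous effect only (can absorb part of the trend).
b = np.polyfit(precip, flow, 1)
resid_naive = flow - np.polyval(b, precip)

# Alternative: adjust for the exogenous variable and time jointly.
G = np.column_stack([np.ones_like(t), precip, t])
coef, *_ = np.linalg.lstsq(G, flow, rcond=None)

print(kendalltau(t, resid_naive), "joint trend slope:", coef[2])
```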

  20. Stability of Delayed Hopfield Neural Networks with Variable-Time Impulses

    Directory of Open Access Journals (Sweden)

    Yangjun Pei

    2014-01-01

    Full Text Available In this paper, global exponential stability criteria for delayed Hopfield neural networks with variable-time impulses are established. The proposed criteria can also be applied to Hopfield neural networks with fixed-time impulses. A numerical example is presented to illustrate the effectiveness of our theoretical results.

  1. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters, and which may change during the variable reduction process. In these methods, variable reduction is performed on the variables ranked in descending order of a given variable property. The methods start with full-spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. We therefore denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as the predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared (NIR) sources and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain
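    A bare-bones sketch of the ranked-elimination loop described above, using the absolute PLS regression coefficients as the predictive property (scikit-learn; the published SVR-PPRV/PPRVR-CAM methods additionally adapt model complexity and use validation-based stopping, omitted here):

```python
# Sketch of predictive-property-ranked variable reduction using the absolute
# PLS regression coefficients as the property (scikit-learn).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def reduce_variables(X, y, n_keep, n_components=5):
    keep = np.arange(X.shape[1])
    while keep.size > n_keep:
        pls = PLSRegression(n_components=min(n_components, keep.size)).fit(X[:, keep], y)
        ranks = np.abs(pls.coef_).ravel()        # predictive property
        keep = np.delete(keep, np.argmin(ranks)) # drop least informative
    return keep

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 100))                   # e.g., 100 NIR wavelengths
y = X[:, :5] @ np.array([3., -2., 1., 4., -1.]) + rng.normal(0, 0.5, 60)
print(sorted(reduce_variables(X, y, n_keep=10)))
```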

  2. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    Science.gov (United States)

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
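    The core GGM computation can be sketched in a few lines: partial correlations follow from the standardized negative inverse covariance (precision) matrix. A minimal Python version (the paper's estimation uses the R packages named above, which add regularization and the temporal/contemporaneous decomposition):

```python
# Sketch of the core GGM computation: partial correlations from the inverse
# covariance (precision) matrix.
import numpy as np

def partial_correlations(data):
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)   # off-diagonal partial correlations
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(7)
z = rng.normal(size=(500, 1))       # shared latent driver
data = np.hstack([z + rng.normal(0, 1, (500, 1)) for _ in range(4)])
print(np.round(partial_correlations(data), 2))
```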

  3. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0-2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries, with filtering against both the chemical energy (using a modern effective potential) and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest Rfree values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates.

  4. Modelling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team

    2003-04-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal of developing a model describing the seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. At the daily time step, the relative soil water content in the upper soil layer, expressed as a fraction of field capacity, was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration, with a mean absolute error of 0.82 µmol m⁻² s⁻¹. The parameterised model exhibits the following principal properties: 1) At a relative upper-layer soil water content of 16% of field capacity, half-maximal soil respiration rates are reached. 2) The apparent temperature sensitivity of soil respiration, measured as Q10, varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to the maximum site leaf area index. At the monthly time scale we employed the approach of Raich et al. (2002, Global Change Biol. 8, 800-812), which used monthly precipitation and air temperature to globally predict soil respiration (T&P model).
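    A rough sketch of a respiration function with the three listed properties; the fitted model's exact functional forms and parameters are not given in the abstract, so those below are assumptions:

```python
# Illustrative-only respiration model with the three properties listed above:
# water half-saturation at 16% of field capacity, a temperature response,
# and linear scaling with maximum site LAI. Functional forms are assumed.
import numpy as np

def soil_respiration(T, W, lai_max, r_ref=1.0, q10=2.0, T_ref=15.0, w_half=0.16):
    """R [umol m-2 s-1]; T [degC], W = soil water as fraction of field capacity."""
    f_temp = q10 ** ((T - T_ref) / 10.0)      # simple Q10 response (assumed)
    f_water = W / (W + w_half)                # half-maximal at W = 0.16
    return r_ref * lai_max * f_temp * f_water

print(soil_respiration(T=20.0, W=0.5, lai_max=4.0))
```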

  5. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    Science.gov (United States)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
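    As a concrete illustration of the recommendation to estimate slopes on annually aggregated series, a small sketch on synthetic data (the study's datasets and methods are not reproduced):

```python
# Sketch of one recommendation above: estimate the trend on annually
# aggregated NDVI (robust Theil-Sen slope) rather than on the raw monthly
# series with its seasonal cycle. Data are synthetic.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(8)
months = np.arange(12 * 30)
ndvi = (0.4 + 0.002 * months / 12                 # weak greening trend
        + 0.2 * np.sin(2 * np.pi * months / 12)   # seasonal cycle
        + rng.normal(0, 0.05, months.size))       # noise

annual = ndvi.reshape(30, 12).mean(axis=1)
slope, intercept, lo, hi = theilslopes(annual, np.arange(30))
print(f"trend = {slope:.4f} NDVI/yr (95% CI {lo:.4f}..{hi:.4f})")
```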

  6. A joint spare part and maintenance inspection optimisation model using the Delay-Time concept

    International Nuclear Information System (INIS)

    Wang Wenbin

    2011-01-01

    Spare parts and maintenance are closely related logistics activities, where maintenance generates the need for spare parts. When preventive maintenance is present, more spare parts may be needed at one time because of the planned preventive maintenance activities. This paper considers the joint optimisation of three decision variables: the ordering quantity, the ordering interval and the inspection interval. The model is constructed using the well-known Delay-Time concept, where the failure process is divided into a two-stage process. The objective function is the long-run expected cost per unit time in terms of the three decision variables to be optimised. Here we use a block-based inspection policy in which all components are inspected at the same time regardless of their ages. This creates a situation in which the time to failure since the immediately preceding inspection is random and has to be modelled by a distribution. This time is called the forward time, and a limiting but closed form of its distribution is obtained. We develop an algorithm for the optimal solution of the decision process using a combination of analytical and enumeration approaches. The model is demonstrated by a numerical example. - Highlights: → Joint optimisation of maintenance and spare part inventory. → The use of the Delay-Time concept. → Block-based inspection. → Fixed order interval but variable order quantity.

  7. Modeling of carbon sequestration in coal-beds: A variable saturated simulation

    International Nuclear Information System (INIS)

    Liu Guoxiang; Smirnov, Andrei V.

    2008-01-01

    Storage of carbon dioxide in deep coal seams is a profitable method to reduce the concentration of greenhouse gases in the atmosphere, while methane can be extracted as a byproduct during carbon dioxide injection into the coal seam. In this procedure, the key element is to keep the carbon dioxide in the coal seam without escape over the long term. This depends on many factors, such as the properties of the coal basin, fracture state and phase equilibrium, and especially the porosity, permeability and saturation of the coal seam. In this paper, a variable saturation model was developed to predict the capacity of carbon dioxide sequestration and coal-bed methane recovery. This variable saturation model can be used to track the saturation variability with the partial pressure changes caused by carbon dioxide injection. Saturation variability is a key factor in predicting the capacity of carbon dioxide storage and methane recovery. Based on this variable saturation model, a set of related variables including capillary pressure, relative permeability, porosity, a coupled adsorption model, and concentration and temperature equations were solved. From the results of the simulation, historical data agree with the variable saturation model as well as with the adsorption model constructed from Langmuir equations. As an example, carbon dioxide sequestration in the Appalachian basin was modeled in this paper. The results of the study and the developed models can provide projections for CO2 sequestration and methane recovery in coal-beds with different regional specifics.
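    A sketch of the Langmuir-type adsorption component such a model typically couples to the flow equations; the extended binary form and all constants below are illustrative assumptions, not values from the paper:

```python
# Sketch of the adsorption component mentioned above: an extended Langmuir
# isotherm for binary CO2/CH4 sorption on coal (illustrative constants only).
import numpy as np

def extended_langmuir(p, v_l, b):
    """Adsorbed volume of each gas: V_i = V_L,i * b_i * p_i / (1 + sum_j b_j p_j)."""
    p, v_l, b = map(np.asarray, (p, v_l, b))
    return v_l * b * p / (1.0 + np.sum(b * p))

p = [4.0, 2.0]          # partial pressures CO2, CH4 [MPa] (assumed)
v_l = [30.0, 15.0]      # Langmuir volumes [m^3/t] (assumed)
b = [0.8, 0.4]          # Langmuir constants [1/MPa] (assumed)
print(extended_langmuir(p, v_l, b))  # CO2 is preferentially adsorbed
```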

  8. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg, a 300 km² mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) at 6-hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

  9. Time-variable gravity potential components for optical clock comparisons and the definition of international time scales

    International Nuclear Information System (INIS)

    Voigt, C.; Denker, H.; Timmen, L.

    2016-01-01

    The latest generation of optical atomic clocks is approaching the level of one part in 10^18 in terms of frequency stability and uncertainty. For clock comparisons and the definition of international time scales, a relativistic redshift effect of the clock frequencies has to be taken into account at a corresponding uncertainty level of about 0.1 m^2 s^-2 and 0.01 m in terms of gravity potential and height, respectively. Besides the predominant static part of the gravity potential, temporal variations must be considered in order to avoid systematic frequency shifts. Time-variable gravity potential components induced by tides and non-tidal mass redistributions are investigated with regard to the level of one part in 10^18. The magnitudes and dominant time periods of the individual gravity potential contributions are investigated globally and for specific laboratory sites together with the related uncertainty estimates. The basics of the computation methods are presented along with the applied models, data sets and software. Solid Earth tides contribute by far the most dominant signal with a global maximum amplitude of 4.2 m^2 s^-2 for the potential and a range (maximum-to-minimum) of up to 1.3 and 10.0 m^2 s^-2 in terms of potential differences between specific laboratories over continental and intercontinental scales, respectively. Amplitudes of the ocean tidal loading potential can amount up to 1.25 m^2 s^-2, while the range of the potential between specific laboratories is 0.3 and 1.1 m^2 s^-2 over continental and intercontinental scales, respectively. These are the only two contributors being relevant at a 10^-17 level. However, several other time-variable potential effects can particularly affect clock comparisons at the 10^-18 level. Besides solid Earth pole tides, these are non-tidal mass redistributions in the atmosphere, the oceans and the continental water storage. (authors)
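    The quoted uncertainty targets follow from the relativistic redshift relation df/f = dW/c^2, where dW is the gravity potential difference between the clocks. A quick check of the numbers in the abstract:

```python
# Fractional frequency shift between two clocks separated by a gravity
# potential difference dW: df/f = dW / c**2 (weak-field approximation).
c = 299_792_458.0  # speed of light, m/s

for dW in (0.1, 4.2):  # m^2 s^-2: the quoted uncertainty target and the
                       # maximum solid Earth tide amplitude from the abstract
    print(f"dW = {dW:4.1f} m^2/s^2  ->  df/f = {dW / c**2:.2e}")
# 0.1 m^2/s^2 indeed corresponds to ~1e-18, the stated clock accuracy level.
```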

  10. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

    A new type of long pulse integrator was designed, based on a variable integral time constant and on subtracting the integral drift using its estimated slope. The integral time constant can be changed by selecting different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a given period of time can be calculated by digital signal processing and used to subtract the drift from the original integral signal in real time, reducing the integral drift. Tests show that this long pulse integrator reduces integral drift effectively and also eliminates the effects of changing the integral time constant. In experiments, the integral time constant can be changed by remote control and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurement in Tokamak experiments. (authors)
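    The drift-correction idea, estimating the integrator drift slope over a quiet interval and subtracting the extrapolated ramp in real time, can be illustrated with a few lines of numerical processing. A minimal sketch on synthetic data (not the authors' DSP implementation):

```python
import numpy as np

fs = 1000.0                              # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)       # 10 s record
# True integrated signal: zero during the first second (no pulse), then a pulse.
true_integral = np.where(t > 1.0, 1.0 - np.cos(2 * np.pi * 0.2 * (t - 1.0)), 0.0)
measured = true_integral + 0.02 * t      # add a slow linear integrator drift

# Estimate the drift slope from the quiet pre-pulse window, where the true
# integral is known to be zero, then subtract the extrapolated ramp.
quiet = t < 1.0
slope, intercept = np.polyfit(t[quiet], measured[quiet], 1)
corrected = measured - (slope * t + intercept)

print(f"estimated drift slope: {slope:.4f} (true 0.02)")
print(f"residual error at t=10 s: {corrected[-1] - true_integral[-1]:+.4f}")
```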

  11. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The variable processing time of a job is described by an increasing or a decreasing function of the job's position in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of the maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of the jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to both. We propose necessary and sufficient conditions for the problems to have a positive integer solution.

  12. Measurement and modeling of diel variability of polybrominated diphenyl ethers and chlordanes in air.

    Science.gov (United States)

    Moeckel, Claudia; Macleod, Matthew; Hungerbühler, Konrad; Jones, Kevin C

    2008-05-01

    Short-term variability of concentrations of polybrominated diphenyl ethers (PBDEs) and chlordanes in air at a semirural site in England over a 5-day period is reported. Four-hour air samples were collected during a period dominated by a high pressure system that produced stable diel (24-h) patterns of meteorological conditions such as temperature and atmospheric boundary layer height. PBDE and chlordane concentrations showed clear diel variability, with concentrations in the afternoon and evening 1.9-2.7 times higher than in the early morning. The measurements are interpreted using a multimedia mass balance model parametrized with forcing functions representing local temperature, atmospheric boundary layer height, wind speed and hydroxyl radical concentrations. Model results indicate that reversible, temperature-controlled air-surface exchange is the primary driver of the diel concentration pattern observed for chlordanes and PBDE 28. For higher brominated PBDE congeners (47, 99 and 100), the effect of variable atmospheric mixing height in combination with irreversible deposition on aerosol particles is dominant and explains the diel patterns almost entirely. Higher concentrations of chlordanes and PBDEs in air observed at the end of the study period could be related to likely source areas using back trajectory analysis. This is the first study to clearly document diel variability in concentrations of PBDEs in air over a period of several days. Our model analysis indicates that high daytime and low nighttime concentrations of semivolatile organic chemicals can arise from different underlying driving processes, and are not necessarily evidence of reversible air-surface exchange on a 24-h time scale.

  13. Translating hydrologically-relevant variables from the ice sheet model SICOPOLIS to the Greenland Analog Project hydrologic modeling domain

    Science.gov (United States)

    Vallot, Dorothée; Applegate, Patrick; Pettersson, Rickard

    2013-04-01

    Projecting future climate and ice sheet development requires sophisticated models and extensive field observations. Given the present state of our knowledge, it is very difficult to say what will happen with certainty. Despite the ongoing increase in atmospheric greenhouse gas concentrations, the possibility that a new ice sheet might form over Scandinavia in the far distant future cannot be excluded. The growth of a new Scandinavian Ice Sheet would have important consequences for buried nuclear waste repositories. The Greenland Analogue Project (GAP), initiated by the Swedish Nuclear Fuel and Waste Management Company (SKB), is working to assess the effects of a possible future ice sheet on groundwater flow by studying a constrained domain in Western Greenland through field measurements (including deep bedrock drilling in front of the ice sheet) combined with numerical modeling. To address the needs of the GAP project, we interpolated results from an ensemble of ice sheet model runs to the smaller and more finely resolved modeling domain used in the GAP project's hydrologic modeling. Three runs were chosen, with fairly different positive degree-day factors, from among those that reproduced the modern ice margin at the borehole position. The interpolated results describe changes in hydrologically-relevant variables over two time periods, 115 ka to 80 ka, and 20 ka to 1 ka. In the first of these time periods, the ice margin advances over the model domain; in the second time period, the ice margin retreats over the model domain. The spatially and temporally dependent variables that we treated include the ice thickness, basal melting rate, surface mass balance, basal temperature, basal thermal regime (frozen or thawed), surface temperature, and basal water pressure. The melt flux is also calculated.
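    Regridding coarse ice sheet model fields onto a smaller, finer hydrologic domain is a standard interpolation step. A minimal sketch using SciPy's griddata (grid spacings and the synthetic field are placeholders; the GAP workflow itself is not reproduced here):

```python
import numpy as np
from scipy.interpolate import griddata

# Coarse ice-sheet-model grid (e.g. ~10 km spacing) with a synthetic field
# standing in for a hydrologically relevant variable such as basal melt rate.
xc, yc = np.meshgrid(np.linspace(0, 100, 11), np.linspace(0, 100, 11))
coarse_field = np.exp(-((xc - 60) ** 2 + (yc - 40) ** 2) / 800.0)

# Finer hydrologic-model domain (e.g. ~1 km spacing) covering a sub-region.
xf, yf = np.meshgrid(np.linspace(40, 80, 41), np.linspace(20, 60, 41))

points = np.column_stack([xc.ravel(), yc.ravel()])
fine_field = griddata(points, coarse_field.ravel(), (xf, yf), method="linear")
print(fine_field.shape, float(np.nanmax(fine_field)))
```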

  14. Mediterranean climate modelling: variability and climate change scenarios

    International Nuclear Information System (INIS)

    Somot, S.

    2005-12-01

    Air-sea fluxes, open-sea deep convection and cyclogenesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variability is quantified and studied. The regional coupling shows a significant impact on the number of intense winter cyclogenesis events as well as on the associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are elucidated. The climate change response is then analysed for the 21st century with the non-coupled models: cyclogenesis decreases, the associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes on the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

  15. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools depends on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 to 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  16. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools depends on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 to 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
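    To make the two competing approaches concrete, the sketch below contrasts the fixed-ratio prediction (TPT = 1.33 x eSCT) with a linear regression on eSCT plus categorical predictors, using synthetic data and scikit-learn; the column names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "eSCT": rng.gamma(4.0, 30.0, n),                  # minutes
    "asa": rng.integers(1, 5, n).astype(str),         # ASA class (as category)
    "anesthesia": rng.choice(["general", "regional"], n),
})
# Synthetic "true" TPT: ACT depends on ASA class and anesthesia type.
act = 20 + 8 * df["asa"].astype(int) + np.where(df["anesthesia"] == "general", 15, 5)
df["TPT"] = df["eSCT"] + act + rng.normal(0, 10, n)

fixed_ratio_pred = 1.33 * df["eSCT"]                  # benchmark model

model = make_pipeline(
    ColumnTransformer([("cat", OneHotEncoder(), ["asa", "anesthesia"])],
                      remainder="passthrough"),
    LinearRegression(),
)
model.fit(df[["asa", "anesthesia", "eSCT"]], df["TPT"])
reg_pred = model.predict(df[["asa", "anesthesia", "eSCT"]])

for name, pred in [("fixed ratio", fixed_ratio_pred), ("regression", reg_pred)]:
    mae = np.mean(np.abs(df["TPT"] - pred))
    print(f"{name:12s} mean absolute error: {mae:6.1f} min")
```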

  17. Important variables in explaining real-time peak price in the independent power market of Ontario

    International Nuclear Information System (INIS)

    Rueda, I.E.A.; Marathe, A.

    2005-01-01

    This paper uses a support vector machine (SVM) based learning algorithm to select important variables that help explain the real-time peak electricity price in the Ontario market. The Ontario market was opened to competition only in May 2002. Due to the limited number of observations available, finding a set of variables that can explain the real-time peak price in the independent power market of Ontario (IMO) is a significant challenge for traders and analysts. Kernel regressions of the explanatory variables on the IMO real-time average peak price show that non-linear dependencies exist between the explanatory variables and the IMO price. This non-linear relationship, combined with the low variable-to-observation ratio, rules out conventional statistical analysis. Hence, we use an alternative machine learning technique to find the important explanatory variables for the IMO real-time average peak price. Results of an SVM-based sensitivity analysis show that the IMO's predispatch average peak price, the actual import peak volume, the peak load of the Ontario market and the net available supply after accounting for load (energy excess) are among the most important variables in explaining the real-time average peak price in the Ontario electricity market. (author)
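    The paper ranks explanatory variables by an SVM-based sensitivity analysis. A generic sketch of the idea, using an SVM regressor and permutation importance from scikit-learn (not the authors' exact algorithm) on synthetic stand-ins for the market variables:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 400  # deliberately few observations, as in the young Ontario market
names = ["predispatch_price", "import_volume", "peak_load", "energy_excess", "noise_var"]
X = rng.normal(size=(n, len(names)))
# Synthetic peak price: nonlinear in the first four variables, independent of the last.
y = (2.0 * X[:, 0] + np.tanh(X[:, 1]) + 0.5 * X[:, 2] ** 2
     - 1.5 * X[:, 3] + rng.normal(0.0, 0.3, n))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:18s} importance: {score:6.3f}")
```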

  18. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to the well-known method of characteristics. The value of the density at some point in time is directly accessible through the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
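    The key mechanism is that along a trajectory of dx/dt = f(x), the log-density evolves as d(log rho)/dt = -div f, so a single extra state dimension carries the density value. A minimal SciPy sketch for a toy 1D ODE (the vector field and initial density are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ODE: dx/dt = f(x) with f(x) = -x**3 + x (bistable system).
f = lambda x: -x**3 + x
df_dx = lambda x: -3 * x**2 + 1          # divergence of f in 1D

def characteristics(t, z):
    """State extended by one dimension: z = (x, log_rho)."""
    x, log_rho = z
    return [f(x), -df_dx(x)]             # d(log rho)/dt = -div f along the path

# Propagate a standard normal initial density along several characteristics.
x0s = np.linspace(-2.0, 2.0, 9)
log_rho0 = -0.5 * x0s**2 - 0.5 * np.log(2 * np.pi)

for x0, lr0 in zip(x0s, log_rho0):
    sol = solve_ivp(characteristics, (0.0, 2.0), [x0, lr0], rtol=1e-8)
    x_t, log_rho_t = sol.y[0, -1], sol.y[1, -1]
    print(f"x0={x0:+.2f} -> x(2)={x_t:+.3f}, rho={np.exp(log_rho_t):.4f}")
```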

  19. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
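    The analysis pipeline, high-pass filtering each site's series at a chosen cut-off and then correlating site pairs, can be sketched in a few lines. A minimal two-site illustration with synthetic data (the filter design is a plausible choice, not taken from the paper):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 12.0                                # samples per hour (5-minute data)
n = int(24 * 90 * fs)                    # 90 days of record

rng = np.random.default_rng(3)
knots = np.arange(0, n, int(48 * fs))    # one weather "event" every ~2 days
synoptic = np.interp(np.arange(n), knots, rng.normal(size=knots.size))
site_a = synoptic + 0.5 * rng.normal(size=n)   # shared slow weather + local noise
site_b = synoptic + 0.5 * rng.normal(size=n)

def highpass_corr(cutoff_hours):
    """Inter-site correlation after removing fluctuations slower than the cut-off."""
    sos = butter(2, (1.0 / cutoff_hours) / (fs / 2.0), btype="high", output="sos")
    fa, fb = sosfiltfilt(sos, site_a), sosfiltfilt(sos, site_b)
    return np.corrcoef(fa, fb)[0, 1]

# Higher cut-off frequencies strip the shared synoptic signal -> lower correlation.
for cutoff in (2000.0, 200.0, 20.0, 2.0):
    print(f"cut-off {cutoff:6.0f} h -> inter-site correlation {highpass_corr(cutoff):+.2f}")
```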

  20. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)

    2014-08-10

    Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.

  1. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.
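    Variable-structure control switches between feedback laws at high speed according to the sign of a sliding surface. A minimal sketch for a double-integrator plant (a generic textbook example, not the glider's actual spin-recovery law; the gains are arbitrary):

```python
import numpy as np

# Plant: double integrator, x'' = u (a crude stand-in for one rotational axis).
dt, T = 0.001, 5.0
x, v = 1.0, 0.0            # initial angle-like state and rate

lam, k = 2.0, 3.0          # sliding-surface slope and switching gain
for _ in range(int(T / dt)):
    s = v + lam * x        # sliding surface s = xdot + lambda * x
    u = -k * np.sign(s)    # switched (variable-structure) control law
    v += u * dt
    x += v * dt

print(f"final state: x={x:+.4f}, v={v:+.4f}  (driven toward the origin)")
```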

  2. The analytical description of high temperature tensile creep for cavitating materials subjected to time variable loads

    International Nuclear Information System (INIS)

    Bocek, M.

    A phenomenological cavitation model is presented by means of which the life time as well as the creep curve equations can be calculated for cavitating materials subjected to time-variable tensile loads. The model presupposes proportionality between the damage A and the damage rate dA/dt; both are connected by the life time function tau. The latter is derived from static stress rupture tests and contains the loading conditions. From this model the life fraction rule (LFR) is derived. The model is used to calculate the creep curves of cavitating materials subjected at high temperatures to non-stationary tensile loading conditions. In the present paper the following loading procedures are considered: creep at constant load F and true stress s; creep at linear load increase ((dF/dt)=const); and creep at constant load amplitude cycling (CLAC). For these loading procedures the creep equations for cavitating and non-cavitating specimens are derived. Under comparable conditions the creep rate of cavitating materials is higher than that of non-cavitating ones. (author)
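    The life fraction rule derived from the model is usually stated as follows; this is the standard textbook form (failure when accumulated life fractions reach unity), given here for orientation rather than as the paper's exact derivation:

```latex
% Piecewise loading: stress \sigma_i applied for duration t_i,
% \tau(\sigma_i) = static rupture life at that stress.
\sum_i \frac{t_i}{\tau(\sigma_i)} = 1
\qquad\text{or, for a continuously varying load,}\qquad
\int_0^{t_f} \frac{\mathrm{d}t}{\tau\big(\sigma(t)\big)} = 1 .
```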

  3. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are helpful not only for finding the root cause when a plant-wide disturbance occurs, but also for revealing how an abnormal event evolves as it propagates through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
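    Time-delayed mutual information locates both the coupling and the propagation delay between two variables by scanning for the lag that maximizes the mutual information. A generic sketch with scikit-learn's estimator (an illustration of TDMI, not the paper's code; the synthetic delay is invented):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
n, true_delay = 3000, 12

# Upstream process variable: smoothed noise (some temporal correlation).
u = np.convolve(rng.normal(size=n + 200), np.ones(20) / 20.0, mode="same")[:n]
# Downstream variable: nonlinear response to the upstream value 12 samples earlier.
d = np.tanh(np.roll(u, true_delay)) + 0.1 * rng.normal(size=n)

# TDMI scan: mutual information between u[t] and d[t + lag] for each lag.
scores = []
for lag in range(0, 30):
    x = u[: n - lag].reshape(-1, 1)
    y = d[lag:]
    scores.append(mutual_info_regression(x, y, random_state=0)[0])

best = int(np.argmax(scores))
print(f"estimated delay: {best} samples (true {true_delay})")
```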

  4. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant trait database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential to advance our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
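    Generating trait parameters that vary smoothly in space, instead of one fixed value per PFT, can be done by sampling a multivariate normal with a distance-based covariance. A minimal sketch (the exponential correlation model and the "TRY-like" trait statistics are placeholders):

```python
import numpy as np

rng = np.random.default_rng(8)

# Grid cells of a study watershed and an exponential spatial correlation model.
xx, yy = np.meshgrid(np.arange(20), np.arange(20))
coords = np.column_stack([xx.ravel(), yy.ravel()])
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr_len = 5.0                         # correlation length in grid cells
cov = np.exp(-dist / corr_len)         # Matern nu=1/2 (exponential) kernel

# Trait statistics (e.g. specific leaf area) as mean and spread within one PFT.
mean_trait, sd_trait = 12.0, 3.0       # placeholder "TRY-like" statistics
field = mean_trait + sd_trait * rng.multivariate_normal(np.zeros(len(coords)), cov)
field = field.reshape(20, 20)
print(f"trait field: mean {field.mean():.1f}, min {field.min():.1f}, max {field.max():.1f}")
```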

  5. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    The occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlations between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which would allow longer lead times. Analysis of on-site recorded rainfall time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX and ARMAX (e.g. Salas [1992]). Such models give their best results when applied to autocorrelated hydrological time series, like river flow or level series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall series usually are, especially when recorded with short sampling intervals. More useful for this purpose are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil
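    The alternating renewal structure, a rectangular storm pulse of random duration and intensity followed by a random dry interval, is straightforward to simulate. A minimal generator with exponential distributions (the distribution choices and parameters are illustrative; the calibrated DRIP model uses forms fitted to the Campania gauges):

```python
import numpy as np

rng = np.random.default_rng(5)

def alternating_renewal_rain(hours, mean_wet=6.0, mean_dry=30.0, mean_intensity=2.0):
    """Hourly rainfall from an alternating renewal process of storms and dry spells."""
    rain = np.zeros(hours)
    t = rng.exponential(mean_dry)                    # start inside a dry interval
    while t < hours:
        wet = rng.exponential(mean_wet)              # storm duration, h
        intensity = rng.exponential(mean_intensity)  # mm/h, constant over the pulse
        i0, i1 = int(t), min(int(t + wet) + 1, hours)
        rain[i0:i1] += intensity
        t += wet + rng.exponential(mean_dry)         # next storm after a dry spell
    return rain

series = alternating_renewal_rain(24 * 365)
print(f"annual total: {series.sum():.0f} mm, wet-hour fraction: {(series > 0).mean():.2f}")
```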

  6. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    Science.gov (United States)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
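    The nozzle model is built on the MacCormack method, an explicit predictor-corrector scheme with alternating one-sided differences. As a minimal illustration of the scheme's structure, the sketch below applies it to 1D linear advection rather than the quasi-1D nozzle equations:

```python
import numpy as np

# 1D linear advection u_t + a * u_x = 0 solved with the MacCormack scheme.
nx, a = 200, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / a                    # CFL number 0.5 (stable for CFL <= 1)
u = np.exp(-200.0 * (x - 0.3) ** 2)  # initial Gaussian pulse

for _ in range(200):
    up = u.copy()
    # Predictor: forward difference in space.
    up[:-1] = u[:-1] - a * dt / dx * (u[1:] - u[:-1])
    un = u.copy()
    # Corrector: backward difference applied to the predicted values.
    un[1:] = 0.5 * (u[1:] + up[1:] - a * dt / dx * (up[1:] - up[:-1]))
    u = un                           # boundary values simply held fixed

print(f"pulse peak after transport: u_max = {u.max():.3f} at x = {x[u.argmax()]:.3f}")
```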

  7. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    In LLC resonant converters, variable duty-cycle control is usually combined with variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model ...

  8. The new Toyota variable valve timing and lift system

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, K.; Fuwa, N.; Yoshihara, Y. [Toyota Motor Corporation (Japan); Hori, K. [Toyota Boshoku Corporation (Japan)

    2007-07-01

    A continuously variable valve timing (duration and phase) and lift system was developed. This system was applied to the valvetrain of a new 2.0L L4 engine (3ZRFAE) for the Japanese market. The system has rocker arms, which allow continuously variable timing and lift, situated between a conventional roller-rocker arm and the camshaft; an electric motor actuator to drive them; and a phase mechanism for the intake and exhaust camshafts (Dual VVT-i). The rocking center of the rocker arm is stationary, and the axial linear motion of a helical spline changes the initial phase of the rocker arm, which varies the timing and lift. The linear motion mechanism uses an original planetary roller screw and is driven by a brushless motor with a built-in electronic control unit. Since the rocking center and the linear-motion helical spline center coincide, a compact cylinder head design was possible, and the cylinder head shares a common design with a conventional engine. Since the ECU controls intake valve duration and timing, a fuel economy gain of up to 10% (depending on driving conditions) is obtained by reducing light-to-medium load pumping losses. Intake efficiency was also maximized throughout the speed range, resulting in a power gain of 10%. Further, HC emissions were reduced due to increased air speed at low valve lift. (orig.)

  9. FAST VARIABILITY AND MILLIMETER/IR FLARES IN GRMHD MODELS OF Sgr A* FROM STRONG-FIELD GRAVITATIONAL LENSING

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal; Marrone, Daniel [Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Medeiros, Lia [Department of Physics, Broida Hall, University of California, Santa Barbara, Santa Barbara, CA 93106 (United States); Sadowski, Aleksander [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Narayan, Ramesh, E-mail: chanc@email.arizona.edu [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-10-20

    We explore the variability properties of long, high-cadence general relativistic magnetohydrodynamic (GRMHD) simulations across the electromagnetic spectrum using an efficient, GPU-based radiative transfer algorithm. We focus on both standard and normal evolution (SANE) and magnetically arrested disk (MAD) simulations with parameters that successfully reproduce the time-averaged spectral properties of Sgr A* and the size of its image at 1.3 mm. We find that the SANE models produce short-timescale variability with amplitudes and power spectra that closely resemble those inferred observationally. In contrast, MAD models generate only slow variability at lower flux levels. Neither set of models shows any X-ray flares, which most likely indicates that additional physics, such as particle acceleration mechanisms, need to be incorporated into the GRMHD simulations to account for them. The SANE models show strong, short-lived millimeter/infrared (IR) flares, with short (≲1 hr) time lags between the millimeter and IR wavelengths, that arise from the combination of short-lived magnetic flux tubes and strong-field gravitational lensing near the horizon. Such events provide a natural explanation for the observed IR flares with no X-ray counterparts.

  10. A real-time crash prediction model for the ramp vicinities of urban expressways

    Directory of Open Access Journals (Sweden)

    Moinul Hossain

    2013-07-01

    Ramp vicinities are arguably the best-known black spots on urban expressways. There, while maintaining high speed, drivers need to respond to several complex tasks simultaneously, such as maneuvering, reading road signs, route planning and keeping a safe distance from other maneuvering vehicles, which demands a higher level of cognitive response to ensure safety. Therefore, any additional discomfort caused by traffic dynamics may induce driving error resulting in a crash. This manuscript presents a methodology for identifying these dynamically forming hazardous traffic conditions near ramp vicinities with high resolution real-time traffic flow data. It separates the ramp vicinities into four zones – upstream and downstream of entrance and exit ramps – and builds four separate real-time crash prediction models. Around two years (December 2007 to October 2009) of crash data, as well as the matching traffic sensor data, from the Shibuya 3 and Shinjuku 4 expressways under the jurisdiction of Tokyo Metropolitan Expressway Company Limited were utilized for this research. Random multinomial logit, a forest of multinomial logit models, was used to identify the most important variables. Finally, a real-time modeling method, the Bayesian belief net (BBN), was employed to build the four models, using ramp flow, flow and congestion index upstream and flow and speed downstream of the ramp location as variables. The newly proposed models could predict 50%, 42%, 43% and 55% of future crashes with around 10% false alarms for the downstream of entrance, downstream of exit, upstream of entrance and upstream of exit ramps, respectively. The models can be utilized in combination with various traffic smoothing measures, such as ramp metering, variable speed limits and warning messages through variable message signs, to enhance safety near ramp vicinities.

  11. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts: one clinical part, studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part, investigating how to predict scores of latent variables so these can be used in subsequent regression ... patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogenous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS, the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to: statistically evaluate the number of phenotypes, test for mixing of HALS with other processes ...

  12. Time-dependent inelastic analysis of metallic media using constitutive relations with state variables

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V; Mukherjee, S [Cornell Univ., Ithaca, N.Y. (USA)

    1977-03-01

    A computational technique in terms of stress, strain and displacement rates is presented for the solution of boundary value problems for metallic structural elements at uniform elevated temperatures subjected to time varying loads. This method can accommodate any number of constitutive relations with state variables recently proposed by other researchers to model the inelastic deformation of metallic media at elevated temperatures. Numerical solutions are obtained for several structural elements subjected to steady loads. The constitutive relations used for these numerical solutions are due to Hart. The solutions are discussed in the context of the computational scheme and Hart's theory.

  13. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
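    The decomposition quoted at the end can be checked with one line of arithmetic: the total effect is the sum of the direct effect and the indirect effect mediated through biodiversity.

```latex
\text{total effect} = \underbrace{-0.3251}_{\text{direct}}
                    + \underbrace{0.4509}_{\text{indirect via biodiversity}}
                    = 0.1258
```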

  14. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
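    The core idea, driving the model with a distribution of sub-daily intensities rather than the daily mean, can be illustrated with a simple infiltration-excess runoff function. A minimal sketch assuming an exponential intensity distribution (the paper's actual cdf forms and runoff equations are not reproduced here):

```python
import numpy as np

# Infiltration-excess runoff: flux = max(i - ic, 0) for rainfall intensity i
# and infiltration capacity ic (both in mm/h).
ic = 5.0
mean_intensity = 3.0            # daily-mean rainfall intensity, mm/h

# Daily model: the mean intensity never exceeds ic -> zero predicted runoff.
runoff_daily = max(mean_intensity - ic, 0.0)

# Distribution-function approach: intensities within the day follow a pdf
# (here exponential with the same mean); integrate runoff over it.
i = np.linspace(0.0, 60.0, 6001)
pdf = np.exp(-i / mean_intensity) / mean_intensity
runoff_cdf = np.trapz(np.maximum(i - ic, 0.0) * pdf, i)

print(f"daily-mean model runoff: {runoff_daily:.2f} mm/h")
print(f"cdf-approach runoff:     {runoff_cdf:.2f} mm/h")  # nonzero: the tail exceeds ic
```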

  15. An observational and modeling study of the regional impacts of climate variability

    Science.gov (United States)

    Horton, Radley M.

    during El Nino events. Based on the results from Chapter One, the analysis is expanded in several ways in Chapter Two. To gain a more complete and statistically meaningful understanding of ENSO, a 25 year time period is used instead of a single event. To gain a fuller understanding of climate variability, additional patterns are analyzed. Finally analysis is conducted at the regional scales that are of interest to farmers and agricultural planners. Key findings are that GISS ModelE can reproduce: (1) the spatial pattern associated with two additional related modes, the Arctic Oscillation (AO) and the North Atlantic Oscillation (NAO); (2) rainfall patterns in Indonesia; and (3) dynamical features such as sea level pressure (SLP) gradients and wind in the study regions. When run in coupled mode, the same model reproduces similar modes spatially but with reduced variance and weak teleconnections. Since Chapter Two identified Western Indonesia as the region where GCMs hold the most promise for agricultural applications, in Chapter Three a finer spatial and temporal scale analysis of ENSO's effects is presented. Agricultural decision-making is also linked to ENSO's climate effects. Early rainy season precipitation and circulation, and same-season planting and harvesting dates, are shown to be sensitive to ENSO. The locus of ENSO convergence and rainfall anomalies is shown to be near the axis of rainy season establishment, defined as the 6--8 mm/day isohyet, an approximate threshold for irrigated rice cultivation. As the axis tracks south and east between October and January, so do ENSO anomalies. Circulation anomalies associated with ENSO are shown to be similar to those associated with rainfall anomalies, suggesting that long lead-time ENSO forecasts may allow more adaptation than 'wait and see' methods, with little loss of forecast skill. Additional findings include: (1) rice and corn yields are lower (higher) during dry (wet) trimesters and El Nino (La Nina) years; and (2

  16. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

    On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and the BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux on time-scales of months and longer. We show that the long-time-scale component of the X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.

  17. Decomposing the heterogeneity of depression at the person-, symptom-, and time-level : Latent variable models versus multimode principal component analysis

    NARCIS (Netherlands)

    de Vos, Stijn; Wardenaar, Klaas J.; Bos, Elisabeth H.; Wit, Ernst C.; de Jonge, Peter

    2015-01-01

    Background: Heterogeneity of psychopathological concepts such as depression hampers progress in research and clinical practice. Latent Variable Models (LVMs) have been widely used to reduce this problem by identification of more homogeneous factors or subgroups. However, heterogeneity exists at

  18. Similar star formation rate and metallicity variability time-scales drive the fundamental metallicity relation

    Science.gov (United States)

    Torrey, Paul; Vogelsberger, Mark; Hernquist, Lars; McKinnon, Ryan; Marinacci, Federico; Simcoe, Robert A.; Springel, Volker; Pillepich, Annalisa; Naiman, Jill; Pakmor, Rüdiger; Weinberger, Rainer; Nelson, Dylan; Genel, Shy

    2018-06-01

    The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we use hydrodynamical simulations to quantify the time-scales over which populations of galaxies oscillate about the average SFR and metallicity values at fixed stellar mass. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR oscillate over similar time-scales, are often anticorrelated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. Our models indicate that galaxies oscillate about equilibrium SFR and metallicity values - set by the galaxy's stellar mass - and that SFR and metallicity offsets evolve in an anticorrelated fashion. This anticorrelated variability of the metallicity and SFR offsets drives the existence of the FMR in our models. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by feedback-driven globally bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR - especially at high redshift.

  19. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801

    Science.gov (United States)

    Choi, Kilchan

    2011-01-01

    This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These…

  20. Modeling and Forecasting of Water Demand in Isfahan Using Underlying Trend Concept and Time Series

    Directory of Open Access Journals (Sweden)

    H. Sadeghi

    2016-02-01

    Introduction: Accurate modeling of urban water demand is very important for forecasting and for the adoption of policies related to water resources management. Thus, for estimating, forecasting and modeling future water requirements, it is important to use models with small errors. Water has a special place among basic human needs, because human life cannot continue without it, and managing its extraction and consumption is therefore essential. Municipal water use includes a variety of demands: domestic, public, industrial and commercial. Predicting urban water demand supports better planning of water resources in arid and semi-arid regions that face water restrictions. Materials and Methods: Technological change is one of the most important factors affecting production and demand functions, so special attention must be paid to how it enters the model. Technology development concerns not only technical aspects; other, non-economic factors (population, geographical and social factors) can also be analyzed. The model examined in this study is a regression model composed of a series of structural components that are allowed to change unobserved over time. An explanatory technology variable (embodied or disembodied) would in principle improve such a model, but because it cannot be measured over time it cannot be entered in the specification. In this study, structural time series models (STSM) and ARMA time series models have been used to model and estimate water demand in Isfahan. Moreover, in order to find the more efficient procedure, the two models have been compared to each other. The data used in this research include water consumption in Isfahan, water price and the monthly pay
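    Both model families compared in the paper are available in standard libraries. A minimal sketch fitting a basic structural time series model (local linear trend plus monthly seasonality) and an ARMA benchmark to the same synthetic monthly series and comparing them by AIC; the data and model orders are placeholders, not the paper's specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 240                                   # 20 years of monthly consumption
trend = np.linspace(100.0, 160.0, n)      # slowly rising underlying demand
season = 10.0 * np.sin(2 * np.pi * np.arange(n) / 12.0)
y = trend + season + rng.normal(0.0, 3.0, n)

# Structural time series model: local linear trend + monthly seasonal component.
stsm = sm.tsa.UnobservedComponents(y, level="local linear trend",
                                   seasonal=12).fit(disp=False)
# ARMA benchmark (an ARIMA(2, 0, 1) here, purely illustrative).
arma = sm.tsa.ARIMA(y, order=(2, 0, 1)).fit()

print(f"STSM AIC: {stsm.aic:8.1f}")
print(f"ARMA AIC: {arma.aic:8.1f}")
print("12-step STSM forecast:", np.round(stsm.forecast(12), 1))
```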

  1. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model that follows from explicitly including fixed costs, where appropriate, in the utility function of the decision-maker.

  2. Bayesian models of thermal and pluviometric time series in the Fucino plateau

    Directory of Open Access Journals (Sweden)

    Adriana Trabucco

    2011-09-01

    This work was developed within the project Metodologie e sistemi integrati per la qualificazione di produzioni orticole del Fucino (Methodologies and integrated systems for the classification of horticultural products in the Fucino plateau), sponsored by the Italian Ministry of Education, University and Research, Strategic Projects, Law 448/97. Agro-system management, especially when high quality in speciality crops is required, demands knowledge of the main features and intrinsic variability of the climate. Statistical models may properly summarize the structure underlying the observed variability; furthermore, they may support the agronomic manager by providing the probability that meteorological events happen in a time window of interest. More than 30 years of daily values collected at four sites located on the Fucino plateau, Abruzzo region, Italy, were studied by fitting Bayesian generalized linear models to maximum/minimum air temperature and rainfall time series. Bayesian predictive distributions of climate variables supporting decision-making processes were calculated at different timescales, 5 days for temperatures and 10 days for rainfall, both to reduce computational effort and to simplify statistical model assumptions. Technicians and field operators, even with limited statistical training, may exploit the model output by inspecting graphs and climatic profiles of the cultivated areas during decision-making processes. Realizations taken from the predictive distributions may also be used as input for agro-ecological models (e.g. models of crop growth and water balance). The fitted models may be exploited to monitor climatic change and to revise the climatic profiles of areas of interest, periodically updating the probability distributions of the target climatic variables. For the sake of brevity, the description of results is limited to just one of the four sites; results for all other sites are available as supplementary information.

  3. EBT time-dependent point model code: description and user's guide

    International Nuclear Information System (INIS)

    Roberts, J.F.; Uckan, N.A.

    1977-07-01

    A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed

  4. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  5. A modelling framework to project future climate change impacts on streamflow variability and extremes in the West River, China

    Directory of Open Access Journals (Sweden)

    Y. Fei

    2014-09-01

    In this study, a hydrological modelling framework is introduced to assess climate change impacts on future river flow in the West River basin, China, especially on streamflow variability and extremes. The modelling framework includes a delta-change method with the quantile-mapping technique to construct future climate forcings on the basis of observed meteorological data and downscaled climate model outputs. This method is able to retain the signals of extreme weather events, as projected by climate models, in the constructed future forcing scenarios. Fed with the historical and future forcing data, a large-scale hydrologic model (the Variable Infiltration Capacity model, VIC) was run for streamflow simulations and projections at daily time scales. A bootstrapping resampling approach was used as an indirect alternative for testing the equality of means, standard deviations and coefficients of variation of the baseline and future streamflow time series, and for assessing future changes in flood return levels. The West River basin case study confirms that the introduced modelling framework is an efficient and effective tool for quantifying streamflow variability and extremes in response to future climate change.
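    The delta-change idea with quantile mapping perturbs each observed value by the climate-model change signal at the corresponding quantile, so changes in the extremes are retained. A minimal empirical-quantile sketch (a common formulation; the paper's exact implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 5.0, 10_000)        # observed daily rainfall-like values
gcm_hist = rng.gamma(2.0, 5.0, 10_000)   # climate model, baseline period
gcm_fut = rng.gamma(2.0, 6.5, 10_000)    # climate model, future period (wetter tail)

def delta_change_qm(obs, gcm_hist, gcm_fut, n_q=100):
    """Scale each observed value by the model change factor at its quantile."""
    q = np.linspace(0.5 / n_q, 1 - 0.5 / n_q, n_q)
    change = np.quantile(gcm_fut, q) / np.quantile(gcm_hist, q)  # multiplicative deltas
    obs_q = np.interp(obs, np.quantile(obs, q), q)               # quantile of each obs
    return obs * np.interp(obs_q, q, change)

future = delta_change_qm(obs, gcm_hist, gcm_fut)
print(f"mean change: {future.mean() / obs.mean():.2f}x, "
      f"99th percentile change: {np.quantile(future, .99) / np.quantile(obs, .99):.2f}x")
```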

  6. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

    International Nuclear Information System (INIS)

    May, W.

    1994-01-01

    First we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Secondly, we assess the possible influence of Sea Surface Temperatures on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the model's underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50° southern latitude, but overestimated at higher latitudes. This is the case for the contributions of the high-frequency and the low-frequency transient fluctuations as well. Further, the model indicates a strong tendency towards zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response to be found in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model gives a strong, but regional, response to the interannual variations of the SST. (orig.)

  7. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error introduced by a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
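
    A minimal sketch of the variable-capacitance idea, assuming a linear C(v) with illustrative parameter values rather than the paper's fitted function:

        import numpy as np

        def c_of_v(v, c0=80.0, k=25.0):
            # Main capacitance as a (piecewise) linear function of voltage (F).
            return c0 + k * max(v, 0.0)

        def simulate(i_load, dt=0.1, v0=2.5, r_s=0.02):
            # Forward-Euler update of the open-circuit voltage, dv/dt = -i/C(v),
            # with a series resistance r_s giving the terminal voltage.
            v, out = v0, []
            for i in i_load:
                v -= dt * i / c_of_v(v)
                out.append(v - r_s * i)
            return np.array(out)

        # Stored charge, and hence state of charge, follows from
        # Q(v) = c0*v + k*v**2/2 rather than from a constant C*v.
        print(simulate(np.full(50, 5.0))[:5])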

  8. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    Science.gov (United States)

    Fraser, Stuart A.; Wood, Nathan J.; Johnston, David A.; Leonard, Graham S.; Greening, Paul D.; Rossetto, Tiziana

    2014-01-01

    Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate departure or a common evacuation departure time for all exposed population. Here, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The method is demonstrated for hypothetical local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios. However, it requires detailed exposure data, which may preclude its use in many situations.
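
    A Monte Carlo sketch of combining distributed travel speeds with uncertain departure times for fixed least-cost path lengths (all distributions below are illustrative assumptions, not the paper's calibrated inputs):

        import numpy as np

        rng = np.random.default_rng(42)

        def fraction_evacuated(path_lengths_m, wave_arrival_s=1800.0, n_draws=1000):
            # Draw a travel speed and a departure time per person per
            # realization, then count arrivals at safety before the wave.
            n = len(path_lengths_m)
            speeds = rng.lognormal(mean=np.log(1.2), sigma=0.3, size=(n_draws, n))  # m/s
            departures = rng.uniform(0.0, 600.0, size=(n_draws, n))                 # s
            arrivals = departures + path_lengths_m / speeds
            return float((arrivals <= wave_arrival_s).mean())

        paths = rng.uniform(200.0, 2500.0, size=500)  # least-cost distances to safety (m)
        print("fraction evacuated:", fraction_evacuated(paths))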

  9. Leisure time physical activity, screen time, social background, and environmental variables in adolescents.

    Science.gov (United States)

    Mota, Jorge; Gomes, Helena; Almeida, Mariana; Ribeiro, José Carlos; Santos, Maria Paula

    2007-08-01

    This study analyzes the relationships between leisure time physical activity (LTPA), sedentary behaviors, socioeconomic status, and perceived environmental variables. The sample comprised 815 girls and 746 boys. In girls, non-LTPA participants reported significantly more screen time. Girls with safety concerns were more likely to be in the non-LTPA group (OR = 0.60) and those who agreed with the importance of aesthetics were more likely to be in the active-LTPA group (OR = 1.59). In girls, an increase of 1 hr of TV watching was a significant predictor of non-LTPA (OR = 0.38). LTPA for girls, but not for boys, seems to be influenced by certain modifiable factors of the built environment, as well as by time watching TV.

  10. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)

  11. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model is developed based on the use of a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space, which undergoes transitions from one state to another with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques involved in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
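
    One simple way to realize a discrete-time, continuous-state Markov chain forecast is to average the historical successors of states similar to the current one; the sketch below makes that analog-style assumption and is not necessarily the paper's exact estimator:

        import numpy as np

        def markov_forecast(series, order=2, k=5):
            # State = the last `order` values; forecast = mean successor of
            # the k closest historical states (an analog approximation).
            x = np.asarray(series, dtype=float)
            states = np.stack([x[i:len(x) - order + i + 1] for i in range(order)], axis=1)
            current, past = states[-1], states[:-1]
            d = np.linalg.norm(past - current, axis=1)
            return x[order:][np.argsort(d)[:k]].mean()

        rng = np.random.default_rng(3)
        ndvi = 0.5 + 0.2 * np.sin(np.linspace(0.0, 20.0, 200)) + 0.02 * rng.standard_normal(200)
        print("one-step NDVI forecast:", markov_forecast(ndvi))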

  12. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
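
    A rough scikit-learn sketch of the SASEA pipeline on synthetic arrays; note that a plain logistic model on the geographic coordinates stands in for the logistic thin plate regression spline, which is a deliberate simplification:

        import numpy as np
        from sklearn.decomposition import SparsePCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X_env = rng.normal(size=(3456, 50))    # block-group variables (synthetic)
        coords = rng.uniform(size=(3456, 2))   # geocoded centroids (synthetic)
        y = rng.integers(0, 2, size=3456)      # asthma indicator (synthetic)

        # Step 1: sparse dimension reduction of the environmental variables.
        Z = SparsePCA(n_components=4, alpha=1.0, random_state=0).fit_transform(
            StandardScaler().fit_transform(X_env))

        # Step 2: logistic model on sparse components while adjusting for space.
        model = LogisticRegression(max_iter=1000).fit(np.hstack([Z, coords]), y)
        print("sparse-component coefficients:", model.coef_[0][:4])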

  13. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that, as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  14. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  15. Incorporating Satellite Time-Series Data into Modeling

    Science.gov (United States)

    Gregg, Watson

    2008-01-01

    In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer this drawback, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitation of long-term satellite data sets is also discussed.

  16. Modeling Time-Dependent Behavior of Concrete Affected by Alkali Silica Reaction in Variable Environmental Conditions.

    Science.gov (United States)

    Alnaggar, Mohammed; Di Luzio, Giovanni; Cusatis, Gianluca

    2017-04-28

    Alkali Silica Reaction (ASR) is known to be a serious problem for concrete worldwide, especially in high humidity and high temperature regions. ASR is a slow process that develops over years to decades, and it is influenced by changes in the environmental and loading conditions of the structure. The problem becomes even more complicated if one recognizes that other phenomena like creep and shrinkage are coupled with ASR. This results in synergistic mechanisms that cannot be easily understood without a comprehensive computational model. In this paper, coupling between creep, shrinkage and ASR is modeled within the Lattice Discrete Particle Model (LDPM) framework. In order to achieve this, a multi-physics formulation is used to compute the evolution of temperature, humidity, cement hydration, and ASR in both space and time, which is then used within physics-based formulations of cracking, creep and shrinkage. The overall model is calibrated and validated on the basis of experimental data available in the literature. Results show that even during free expansion (zero macroscopic stress), a significant degree of coupling exists because ASR-induced expansions are relaxed by meso-scale creep driven by self-equilibrated stresses at the meso-scale. This explains and highlights the importance of considering ASR and other time-dependent aging and deterioration phenomena at an appropriate length scale in coupled modeling approaches.

  17. Analytical and Numerical solutions of a nonlinear alcoholism model via variable-order fractional differential equations

    Science.gov (United States)

    Gómez-Aguilar, J. F.

    2018-03-01

    In this paper, we analyze an alcoholism model which involves the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives with constant and variable order. Two fractional mathematical models are considered, with and without delay. Special solutions using an iterative scheme via Laplace and Sumudu transforms were obtained. We studied the uniqueness and existence of the solutions employing the fixed-point theorem. The generalized model with variable order was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions were presented in detail. Numerical examples of the approximate solutions are provided to show that the numerical methods are computationally efficient. Therefore, by including both fractional derivatives and finite time delays in the alcoholism model studied, we believe that we have established a more complete and more realistic model of alcoholism and of how the impact of Twitter affects the spread of drinking.

  18. Model atmospheres with periodic shocks [pulsations and mass loss in variable stars]

    Science.gov (United States)

    Bowen, G. H.

    1989-01-01

    The pulsation of a long-period variable star generates shock waves which dramatically affect the structure of the star's atmosphere and produce conditions that lead to rapid mass loss. Numerical modeling of atmospheres with periodic shocks is being pursued to study the processes involved and the evolutionary consequences for the stars. It is characteristic of these complex dynamical systems that most effects result from the interaction of various time-dependent processes.

  19. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

    International Nuclear Information System (INIS)

    Millings, M.; Denham, M.; Looney, B.

    2012-01-01

    … chemistry and variability included: (1) the nature or chemistry of the waste streams, (2) the open system of the basins, and (3) the duration of discharge of the waste stream types. Mixing models of the archetype waste streams indicated that the overall basin system would likely remain acidic much of the time. Only extended periods of predominantly alkaline waste discharge (e.g., >70% alkaline waste) would dramatically alter the average pH of wastewater entering the basins. Short-term and long-term variability were evaluated by performing multiple stepwise modeling runs to calculate the oscillation of bulk chemistry in the basins in response to short-term variations in waste stream chemistry. Short-term (1/2 month and 1 month) oscillations in the waste stream types only affected the chemistry in Basin 1; little variation was observed in Basins 2 and 3. As the largest basin, Basin 3 is considered the primary source to the groundwater. Modeling showed that the fluctuation in chemistry of the waste streams is not directly representative of the source term to the groundwater (i.e., Basin 3). The sequence of receiving basins and the large volume of water in Basin 3 'smooth' or nullify the short-term variability in waste stream composition. As part of this study, a technically based 'charge-balanced' nominal source term chemistry was developed for Basin 3 for a narrow range of pH (2.7 to 3.4). An example is also provided of how these data could be used to quantify uncertainty over the long-term variations in waste stream chemistry and hence Basin 3 chemistry.

  20. Application of several variable-valve-timing concepts to an LHR engine

    Science.gov (United States)

    Morel, T.; Keribar, R.; Sawlivala, M.; Hakim, N.

    1987-01-01

    The paper discusses advantages provided by electronically controlled, hydraulically activated valves (ECVs) when applied to low heat rejection (LHR) engines. The ECV concept provides additional engine control flexibility by allowing for variable valve timing as a function of speed and load, or for a given transient condition. The results of a study carried out to assess the benefits that this flexibility can offer to an LHR engine indicated that, when judged on the benefits to BSFC, volumetric efficiency, and peak firing pressure, ECVs would provide only modest benefits in comparison to conventional valve profiles. It is noted, however, that once installed on the engine, the ECVs would permit a whole range of more sophisticated variable valve timing strategies not otherwise possible, such as high compression cranking, engine braking, cylinder cutouts, and volumetric efficiency tuning with engine speed.

  1. Variability-aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
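
    The linear propagation of variance has a one-line core: for a performance metric y = f(p), Var[y] ≈ J Σ_p J^T, with J the sensitivity of the metric to the compact-model parameters. A sketch with hypothetical sensitivities:

        import numpy as np

        def propagate_variance(jacobian, param_cov):
            # First-order (linear) propagation of variance through y = f(p).
            J = np.atleast_2d(jacobian)
            return J @ param_cov @ J.T

        # e.g. drain-current sensitivity to (threshold voltage, mobility
        # factor); numbers are illustrative, not extracted values.
        J = np.array([[-2.1e-4, 3.5e-5]])
        Sigma_p = np.diag([25e-6, 4e-6])    # parameter variances
        print("Var[I_d] =", propagate_variance(J, Sigma_p)[0, 0])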

  2. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  3. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
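
    A sketch of an additive binary Markov chain generator; in the paper the memory kernel is derived from the empirical autocorrelation function, whereas the decaying weights below are purely illustrative:

        import numpy as np

        def additive_binary_chain(weights, p_mean, n_steps, seed=0):
            # P(high wind at t) = p_mean + sum_k w_k * (x_{t-k} - p_mean),
            # with weights[0] applied to the most recent state.
            rng = np.random.default_rng(seed)
            N = len(weights)
            x = list((rng.random(N) < p_mean).astype(float))   # warm-up history
            for _ in range(n_steps):
                recent = np.array(x[-1:-N - 1:-1])             # newest first
                p = np.clip(p_mean + weights @ (recent - p_mean), 0.0, 1.0)
                x.append(float(rng.random() < p))
            return np.array(x[N:], dtype=int)

        series = additive_binary_chain(0.4 * 0.7 ** np.arange(24), 0.35, 5000)
        print("high-wind occupation:", series.mean())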

  4. Drag Coefficient Variability and Thermospheric Models

    Science.gov (United States)

    Moe, Kenneth

    Satellite drag coefficients depend upon a variety of factors: the shape of the satellite, its altitude, the eccentricity of its orbit, the temperature and mean molecular mass of the ambient atmosphere, and the time in the sunspot cycle. At altitudes where the mean free path of the atmospheric molecules is large compared to the dimensions of the satellite, the drag coefficients can be determined from the theory of free-molecule flow. The dependence on altitude is caused by the concentration of atomic oxygen, which plays an important role through its ability to adsorb on the satellite surface and thereby affect the energy loss of molecules striking the surface. The eccentricity of the orbit determines the satellite velocity at perigee, and therefore the energy of the incident molecules relative to the energy of adsorption of atomic oxygen atoms on the surface. The temperature of the ambient atmosphere determines the extent to which the random thermal motion of the molecules influences the momentum transfer to the satellite. The time in the sunspot cycle affects the ambient temperature as well as the concentration of atomic oxygen at a particular altitude. Tables and graphs will be used to illustrate the variability of drag coefficients. Before there were any measurements of gas-surface interactions in orbit, Izakov and Cook independently made an excellent estimate that the drag coefficient of satellites of compact shape would be 2.2. That numerical value, independent of altitude, was used by Jacchia to construct his model from the early measurements of satellite drag. Consequently, there is an altitude-dependent bias in the model. From the sparse orbital experiments that have been done, we know that the molecules which strike satellite surfaces rebound in a diffuse angular distribution with an energy loss given by the energy accommodation coefficient. As more evidence accumulates on the energy loss, more realistic drag coefficients are being calculated. These improved drag…

  5. Predicting the outbreak of hand, foot, and mouth disease in Nanjing, China: a time-series model based on weather variability

    Science.gov (United States)

    Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing

    2017-10-01

    Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct a model and those from 2014 to 2015 were used to validate it. Moreover, we used weekly prediction for the data between 1 January 2014 and 31 December 2015, and leave-1-week-out prediction was used to validate the performance of model prediction. SARIMA(2,0,0)_52 associated with the average temperature at a lag of 1 week appeared to be the best model (R^2 = 0.936, BIC = 8.465), which also showed non-significant autocorrelations in the residuals. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015. There was a high agreement rate between the predicted and observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.
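
    A statsmodels sketch of such a weather-based model, reading 'SARIMA(2,0,0)_52' as a seasonal AR(2) with period 52 (one plausible reading of the notation; both series names are hypothetical pandas Series):

        import statsmodels.api as sm

        def fit_hfmd_model(hfmd_weekly, temp_weekly):
            # Weekly HFMD counts with average temperature lagged one week as
            # an exogenous regressor and seasonal AR(2) errors at period 52.
            exog = temp_weekly.shift(1).bfill()
            model = sm.tsa.statespace.SARIMAX(
                hfmd_weekly,
                exog=exog,
                order=(0, 0, 0),
                seasonal_order=(2, 0, 0, 52),
            )
            return model.fit(disp=False)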

  6. Modeling multivariate time series on manifolds with skew radial basis functions.

    Science.gov (United States)

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters, in particular the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We demonstrate the new methodologies on several example problems, including modeling data on manifolds and the prediction of chaotic time series.

  7. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  8. Intraindividual variability in reaction time predicts cognitive outcomes 5 years later.

    Science.gov (United States)

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; Macdonald, Stuart W S; Hunter, Michael A

    2010-11-01

    Building on results suggesting that intraindividual variability in reaction time (inconsistency) is highly sensitive to even subtle changes in cognitive ability, this study addressed the capacity of inconsistency to predict change in cognitive status (i.e., cognitive impairment, no dementia [CIND] classification) and attrition 5 years later. Two hundred twelve community-dwelling older adults, initially aged 64-92 years, remained in the study after 5 years. Inconsistency was calculated from baseline reaction time performance. Participants were assigned to groups on the basis of their fluctuations in CIND classification over time. Logistic and Cox regressions were used. Baseline inconsistency significantly distinguished among those who remained or transitioned into CIND over the 5 years and those who were consistently intact (e.g., stable intact vs. stable CIND, Wald χ²(1) = 7.91, p < .01, Exp(β) = 1.49). Average level of inconsistency over time was also predictive of study attrition, e.g., Wald χ²(1) = 11.31, p < .01, Exp(β) = 1.24. For both outcomes, greater inconsistency was associated with a greater likelihood of being in a maladaptive group 5 years later. Variability based on moderately cognitively challenging tasks appeared to be particularly sensitive to longitudinal changes in cognitive ability. Mean rate of responding was a comparable predictor of change in most instances, but individuals were at greater relative risk of being in a maladaptive outcome group if they were more inconsistent rather than if they were slower in responding. Implications for the potential utility of intraindividual variability in reaction time as an early marker of cognitive decline are discussed. © 2010 APA, all rights reserved.

  9. A novel methodology improves reservoir characterization models using geologic fuzzy variables

    Energy Technology Data Exchange (ETDEWEB)

    Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela)]; Soto O, David A. [Texas A and M University, College Station, TX (United States)]

    2004-07-01

    One of the research projects carried out in the Cusiana field to explain its rapid decline in recent years aimed to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells following the same trend of the sand depositions for each facies or layer, according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means the geologic fuzzy variable could explain the fabric, grain size, and pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the previous model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a dramatically more promising way to obtain better permeability models. The use of these models has had a high impact on the explanation of well performance and workovers, and on reservoir simulation models. (author)

  10. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Finally, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  11. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application context. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work regarding Service Variability Modelling presented in a previous publication, where we described an approach for the customization of services for individual application contexts by creating simplified variants based on model-driven variability management. This article presents our revised service variability metamodel, new features of the variability tools, and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  12. Time-dependence in relativistic collisionless shocks: theory of the variable wisps in the Crab Nebula

    Energy Technology Data Exchange (ETDEWEB)

    Spitkovsky, A

    2004-02-05

    We describe results from time-dependent numerical modeling of the collisionless reverse shock terminating the pulsar wind in the Crab Nebula. We treat the upstream relativistic wind as composed of ions and electron-positron plasma embedded in a toroidal magnetic field, flowing radially outward from the pulsar in a sector around the rotational equator. The relativistic cyclotron instability of the ion gyrational orbit downstream of the leading shock in the electron-positron pairs launches outward propagating magnetosonic waves. Because of the fresh supply of ions crossing the shock, this time-dependent process achieves a limit-cycle, in which the waves are launched with periodicity on the order of the ion Larmor time. Compressions in the magnetic field and pair density associated with these waves, as well as their propagation speed, semi-quantitatively reproduce the behavior of the wisp and ring features described in recent observations obtained using the Hubble Space Telescope and the Chandra X-Ray Observatory. By selecting the parameters of the ion orbits to fit the spatial separation of the wisps, we predict the period of time variability of the wisps that is consistent with the data. When coupled with a mechanism for non-thermal acceleration of the pairs, the compressions in the magnetic field and plasma density associated with the optical wisp structure naturally account for the location of X-ray features in the Crab. We also discuss the origin of the high energy ions and their acceleration in the equatorial current sheet of the pulsar wind.

  13. Reaction Time Variability in Children With ADHD Symptoms and/or Dyslexia

    OpenAIRE

    Gooch, Debbie; Snowling, Margaret J.; Hulme, Charles

    2012-01-01

    Reaction time (RT) variability on a Stop Signal task was examined among children with attention deficit hyperactivity disorder (ADHD) symptoms and/or dyslexia in comparison to typically developing (TD) controls. Children's go-trial RTs were analyzed using a novel ex-Gaussian method. Children with ADHD symptoms had increased variability in the fast but not the slow portions of their RT distributions compared to those without ADHD symptoms. The RT distributions of children with d…

  14. Modeling and Simulation of Variable Mass, Flexible Structures

    Science.gov (United States)

    Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.

    2009-01-01

    The advent of the new Ares I launch vehicle has highlighted the need for advanced dynamic analysis tools for variable mass, flexible structures. This system is composed of interconnected flexible stages or components undergoing rapid mass depletion through the consumption of solid or liquid propellant. In addition to large rigid body configuration changes, the system simultaneously experiences elastic deformations. In most applications, the elastic deformations are compatible with linear strain-displacement relationships and are typically modeled using the assumed modes technique. The deformation of the system is approximated through the linear combination of the products of spatial shape functions and generalized time coordinates. Spatial shape functions are traditionally composed of normal mode shapes of the system, or even constraint modes and static deformations derived from finite element models of the system. Equations of motion for systems undergoing coupled large rigid body motion and elastic deformation have previously been derived through a number of techniques [1]. However, in these derivations, the mode shapes or spatial shape functions of the system components were considered constant. With the Ares I vehicle, however, the structural characteristics of the system change with the mass of the system. Previous approaches to solving this problem involve periodic updates to the spatial shape functions or interpolation between shape functions based on system mass or elapsed mission time. These solutions often introduce misleading or even unstable numerical transients into the system, and interpolation between shape functions is not intuitive. This paper presents an approach in which the shape functions are held constant and operate on the changing mass and stiffness matrices of the vehicle components. Each vehicle stage or component finite element model is broken into dry structure and propellant models. A library of propellant models is used to describe the…
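
    In conventional notation, the assumed modes expansion and the resulting equations of motion take the form below; the approach described in the abstract keeps the shape functions fixed and lets the mass and stiffness matrices carry the time variation (a standard-form sketch, not the paper's exact derivation):

        u(x,t) \approx \sum_{i=1}^{n} \phi_i(x)\, q_i(t),
        \qquad
        M(t)\,\ddot{q}(t) + K(t)\,q(t) = Q(t),
        \qquad
        M_{ij}(t) = \int_V \rho(t)\,\phi_i\,\phi_j\,\mathrm{d}V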

  15. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    Full Text Available The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed red shift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest red shift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^-2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.

  16. Generalized Network Psychometrics: Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between…

  17. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for the computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for the assessment of water resources, maxima, minima runoff, etc.) as well as a new one characterizing an intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed which has two coefficients connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter which characterizes the intensity of synoptic and macro-synoptic fluctuations inside a year. Effective statistical methods have been developed for the separation of climate variability and climate change and the extraction of homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step has been connected with a generalization of the results of detected climate changes over the area and with spatial modeling. For the determination of a homogeneous region with the same…

  18. Improving evapotranspiration in a land surface model using biophysical variables derived from MSG/SEVIRI satellite

    Directory of Open Access Journals (Sweden)

    N. Ghilain

    2012-08-01

    Full Text Available Monitoring evapotranspiration over land is highly dependent on the surface state and vegetation dynamics. Data from spaceborne platforms are desirable to complement estimations from land surface models. The success of daily evapotranspiration monitoring at continental scale relies on the availability, quality and continuity of such data. The biophysical variables derived from SEVIRI on board the geostationary satellite Meteosat Second Generation (MSG) and distributed by the Satellite Application Facility on Land surface Analysis (LSA-SAF) are particularly interesting for such applications, as they are aimed at providing continuous and consistent daily time series in near-real time over Africa, Europe and South America. In this paper, we compare them to monthly vegetation parameters from a database commonly used in numerical weather prediction (ECOCLIMAP-I), showing the benefits of the new daily products in detecting the spatial and temporal (seasonal and inter-annual) variability of the vegetation, especially relevant over Africa. We propose a method to handle Leaf Area Index (LAI) and Fractional Vegetation Cover (FVC) products for evapotranspiration monitoring with a land surface model at 3–5 km spatial resolution. The method is conceived to be applicable for near-real-time processes at continental scale and relies on the use of a land cover map. We assess the impact of using LSA-SAF biophysical variables, compared to ECOCLIMAP-I, on evapotranspiration estimated by the land surface model H-TESSEL. Comparison with in-situ observations in Europe and Africa shows an improved estimation of the evapotranspiration, especially in semi-arid climates. Finally, the impact on the modelled evapotranspiration is compared over a north–south transect with a large gradient of vegetation and climate in Western Africa, using LSA-SAF radiation forcing derived from remote sensing. Differences are highlighted. An evaluation against remote-sensing-derived land…

  1. Fractional derivatives of constant and variable orders applied to anomalous relaxation models in heat transfer problems

    Directory of Open Access Journals (Sweden)

    Yang Xiao-Jun

    2017-01-01

    Full Text Available In this paper, we address a class of fractional derivatives of constant and variable order for the first time. Fractional-order relaxation equations of constant and variable order, in the sense of the Caputo type, are modeled from a mathematical point of view. Comparative results of the anomalous relaxation among the various fractional derivatives are also given; they are very efficient in describing the complex phenomena arising in heat transfer.
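
    For the constant-order Caputo case, the relaxation model and its closed-form solution take the standard form below; the variable-order case replaces α by α(t) and generally requires numerical treatment:

        {}^{C}D_t^{\alpha} u(t) = -\lambda\, u(t),
        \qquad
        u(t) = u_0\, E_{\alpha}\!\left(-\lambda t^{\alpha}\right),
        \qquad
        E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}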

  2. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding in the Earth's geological and physical processes governing sediment deposition render subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide

  3. Prediction of autoignition in a lifted methane/air flame using an unsteady flamelet/progress variable model

    Energy Technology Data Exchange (ETDEWEB)

    Ihme, Matthias; See, Yee Chee [Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)]

    2010-10-15

    An unsteady flamelet/progress variable (UFPV) model has been developed for the prediction of autoignition in turbulent lifted flames. The model is a consistent extension to the steady flamelet/progress variable (SFPV) approach, and employs an unsteady flamelet formulation to describe the transient evolution of all thermochemical quantities during the flame ignition process. In this UFPV model, all thermochemical quantities are parameterized by mixture fraction, reaction progress parameter, and stoichiometric scalar dissipation rate, eliminating the explicit dependence on a flamelet time scale. An a priori study is performed to analyze critical modeling assumptions that are associated with the population of the flamelet state space. For application to LES, the UFPV model is combined with a presumed PDF closure to account for subgrid contributions of mixture fraction and reaction progress variable. The model was applied in LES of a lifted methane/air flame. Additional calculations were performed to quantify the interaction between turbulence and chemistry a posteriori. Simulation results obtained from these calculations are compared with experimental data. Compared to the SFPV results, the unsteady flamelet/progress variable model captures the autoignition process, and good agreement with measurements is obtained for mixture fraction, temperature, and species mass fractions. From the analysis of scatter data and mixture fraction-conditional results it is shown that the turbulence/chemistry interaction delays the ignition process towards lower values of scalar dissipation rate, and a significantly larger region in the flamelet state space is occupied during the ignition process. (author)
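
    The presumed-PDF step can be sketched in one dimension: a flamelet quantity tabulated against mixture fraction is averaged against a beta PDF whose parameters follow from the filtered mean and variance (the flamelet profile below is illustrative, and the progress-variable and dissipation-rate dimensions are omitted for brevity):

        import numpy as np
        from scipy import stats

        def presumed_pdf_average(phi_table, z_grid, z_mean, z_var):
            # Beta-PDF parameters from the filtered mean and variance of
            # mixture fraction, then numerical convolution over the table.
            gamma = z_mean * (1.0 - z_mean) / max(z_var, 1e-12) - 1.0
            a, b = z_mean * gamma, (1.0 - z_mean) * gamma
            pdf = stats.beta(a, b).pdf(z_grid)
            return np.trapz(phi_table * pdf, z_grid) / np.trapz(pdf, z_grid)

        z = np.linspace(1e-4, 1.0 - 1e-4, 400)
        temp_flamelet = 300.0 + 1700.0 * np.exp(-((z - 0.3) / 0.15) ** 2)  # illustrative
        print(presumed_pdf_average(temp_flamelet, z, z_mean=0.3, z_var=0.01))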

  4. Mixing times towards demographic equilibrium in insect populations with temperature variable age structures.

    Science.gov (United States)

    Damos, Petros

    2015-08-01

    In this study, we use entropy-related mixing rate modules to measure the effects of temperature on insect population stability and demographic breakdown. The uncertainty in the age of the mother of a randomly chosen newborn, and how it evolves after a finite number of time steps, is modeled using a stochastic transformation of the Leslie matrix. Age classes are represented as a cycle graph, and their transitions towards the stable age distribution are described as an exact Markov chain. The dynamics of divergence, from a non-equilibrium state towards equilibrium, are evaluated using the Kolmogorov-Sinai entropy. Moreover, the Kullback-Leibler distance is applied as an information-theoretic measure to estimate exact mixing times of age transition probabilities towards equilibrium. Using empirical data, we show, for given initial conditions and simulated projections through time, that population entropy can effectively be applied to detect demographic variability towards equilibrium under different temperature conditions. Changes in entropy are correlated with the fluctuations of the insect population decay rates (i.e., demographic stability towards equilibrium). Moreover, shorter mixing times are directly linked to lower entropy rates and vice versa. This may be linked to the properties of the insect model system, which, in contrast to warm-blooded animals, has the ability to greatly change its metabolic and demographic rates. Moreover, population entropy and the related distance measures that are applied provide a means to measure these rates. The current results and model projections provide clear biological evidence of why dynamic population entropy may be useful for measuring population stability. Copyright © 2015 Elsevier Inc. All rights reserved.
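
    A compact sketch of the projection-and-divergence computation: a Leslie matrix with illustrative entries is iterated, and the Kullback-Leibler distance of the age distribution to the stable one (the dominant eigenvector) is tracked at each step:

        import numpy as np

        def leslie(fecundity, survival):
            # First row holds fecundities, sub-diagonal holds survival rates.
            n = len(fecundity)
            L = np.zeros((n, n))
            L[0, :] = fecundity
            L[np.arange(1, n), np.arange(n - 1)] = survival
            return L

        def kl_to_stable(L, x0, steps):
            # Stable age distribution from the dominant eigenvector.
            vals, vecs = np.linalg.eig(L)
            stable = np.abs(vecs[:, np.argmax(vals.real)].real)
            stable /= stable.sum()
            x, out = np.asarray(x0, dtype=float), []
            for _ in range(steps):
                x = L @ x
                p = x / x.sum()
                out.append(float(np.sum(p * np.log(p / stable))))
            return out

        L = leslie([0.0, 1.2, 1.5, 0.8], [0.6, 0.5, 0.4])
        print(kl_to_stable(L, [0.7, 0.1, 0.1, 0.1], 6))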

  5. Strong-field Breit–Wheeler pair production in two consecutive laser pulses with variable time delay

    Directory of Open Access Journals (Sweden)

    Martin J.A. Jansen

    2017-03-01

    Full Text Available Photoproduction of electron–positron pairs by the strong-field Breit–Wheeler process in an intense laser field is studied. The laser field is assumed to consist of two consecutive short pulses, with a variable time delay in between. By numerical calculations within the framework of scalar quantum electrodynamics, we demonstrate that the time delay exerts a strong impact on the pair-creation probability. For the case when both pulses are identical, the effect is traced back to the relative quantum phase of the interfering S-matrix amplitudes and explained within a simplified analytical model. Conversely, when the two laser pulses differ from each other, the pair-creation probability depends not only on the time delay but, in general, also on the temporal order of the pulses.
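    The role of the relative quantum phase can be caricatured as two-path interference: if each pulse alone contributes an S-matrix amplitude A, a delay between identical pulses adds a relative phase, and the pair-creation probability oscillates between 0 and 4|A|². The sketch below is a toy model with assumed parameter values, not the scalar-QED calculation of the paper.

```python
import numpy as np

# Toy two-path interference model: each pulse contributes an S-matrix
# amplitude A; the second pulse picks up a relative phase set by the
# pair energy and the delay (natural units; all values assumed).
A = 0.01                              # single-pulse amplitude
E_pair = 2.0                          # pair energy, units of electron mass
tau = np.linspace(0.0, 20.0, 400)     # variable time delay

P_pair = np.abs(A + A * np.exp(1j * E_pair * tau)) ** 2

# The probability oscillates between 0 and 4|A|^2 as the delay varies,
# mirroring the strong delay dependence reported for identical pulses.
print(P_pair.min(), P_pair.max(), 4 * abs(A) ** 2)
```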

  6. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    Full Text Available In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time of reaching the absorbing state. We assume that the system can be described by standard constructions such as supplementary variables and the device of stages. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
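    The failure-time distribution induced by first passage to the absorbing state is a phase-type density, f(t) = α exp(Tt) t₀ with exit-rate vector t₀ = −T·1, and evaluating it requires exactly the matrix exponentials of upper triangular matrices mentioned above. A minimal sketch with invented rates:

```python
import numpy as np
from scipy.linalg import expm

# Upper triangular subgenerator over m = 3 transient states and an
# initial distribution alpha (rates invented for illustration).
T = np.array([[-1.5,  1.0,  0.2],
              [ 0.0, -0.8,  0.5],
              [ 0.0,  0.0, -0.4]])
alpha = np.array([0.7, 0.2, 0.1])
t0 = -T @ np.ones(3)          # exit rates into the absorbing state

def failure_density(t):
    """Phase-type density of the first time the chain is absorbed."""
    return float(alpha @ expm(T * t) @ t0)

for t in (0.5, 1.0, 2.0, 5.0):
    print(t, failure_density(t))
```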

  7. Hydroclimate variability in Scandinavia over the last millennium - insights from a climate model-proxy data comparison

    Science.gov (United States)

    Seftigen, Kristina; Goosse, Hugues; Klein, Francois; Chen, Deliang

    2017-12-01

    The integration of climate proxy information with general circulation model (GCM) results offers considerable potential for deriving greater understanding of the mechanisms underlying climate variability, as well as unique opportunities for out-of-sample evaluations of model performance. In this study, we combine insights from a new tree-ring hydroclimate reconstruction from Scandinavia with projections from a suite of forced transient simulations of the last millennium and historical intervals from the CMIP5 and PMIP3 archives. Model simulations and proxy reconstruction data are found to broadly agree on the modes of atmospheric variability that produce droughts-pluvials in the region. Despite these dynamical similarities, large differences between simulated and reconstructed hydroclimate time series remain. We find that the GCM-simulated multi-decadal and/or longer hydroclimate variability is systematically smaller than the proxy-based estimates, whereas the dominance of GCM-simulated high-frequency components of variability is not reflected in the proxy record. Furthermore, the paleoclimate evidence indicates in-phase coherencies between regional hydroclimate and temperature on decadal timescales, i.e., sustained wet periods have often been concurrent with warm periods and vice versa. The CMIP5-PMIP3 archive suggests, however, out-of-phase coherencies between the two variables in the last millennium. The lack of adequate understanding of mechanisms linking temperature and moisture supply on longer timescales has serious implications for attribution and prediction of regional hydroclimate changes. Our findings stress the need for further paleoclimate data-model intercomparison efforts to expand our understanding of the dynamics of hydroclimate variability and change, to enhance our ability to evaluate climate models, and to provide a more comprehensive view of future drought and pluvial risks.

  8. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

    Science.gov (United States)

    Schneider, Adam; Flanner, Mark; Perket, Justin

    2018-02-01

    Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Phase 5 (CMIP5) multimodel ensemble data set, we calculate time-evolving SAF over multiple decades from linear regressions of surface albedo on temperature. Results are meaningful when the temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, the multimodel ensemble mean SAF increases from 0.37 to 0.42 W m-2 K-1. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K of temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.
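    Operationally, decadal-scale SAF as described here reduces to a regression slope, retained only when the decade warms by more than 0.5 K. The sketch below computes the slope of albedo on temperature for a synthetic decade; converting to W m-2 K-1 via a radiative kernel is noted but not attempted, and all numbers are invented.

```python
import numpy as np

def decadal_saf(albedo, temperature, min_dT=0.5):
    """Slope of annual-mean surface albedo regressed on temperature
    over one decade; None when warming is below the 0.5 K threshold."""
    if temperature.max() - temperature.min() < min_dT:
        return None
    slope = np.polyfit(temperature, albedo, 1)[0]
    # Slope is in albedo units per K; a radiative kernel would be
    # needed to convert to W m-2 K-1 (not attempted here).
    return slope

rng = np.random.default_rng(0)
T = np.linspace(0.0, 0.8, 10) + 0.05 * rng.standard_normal(10)
a = 0.30 - 0.01 * T + 0.002 * rng.standard_normal(10)
print(decadal_saf(a, T))
```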

  9. Variability of African Farming Systems from Phenological Analysis of NDVI Time Series

    Science.gov (United States)

    Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.

    2011-01-01

    Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security, and its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on cumulated NDVI over the season (cumNDVI), which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the droughts of the 1980s. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can provide an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis: the diversity and trends found within farming system boundaries underline that farming systems are dynamic.
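    A minimal sketch of the two phenological quantities used above, seasonal cumNDVI and its 1982-2006 linear trend, for a single synthetic pixel (the season window and NDVI values are invented):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Synthetic 10-day NDVI composites for one pixel over 1982-2006; the
# growing season is taken as composites 10-25 of each year (assumed).
years = np.arange(1982, 2007)
ndvi = 0.3 + 0.25 * rng.random((years.size, 36))
season = slice(10, 25)

# cumNDVI: NDVI cumulated over the season, a proxy for net primary
# productivity, computed per year.
cumndvi = ndvi[:, season].sum(axis=1)

# Linear trend over 1982-2006, as used to map areas of increasing or
# decreasing seasonal productivity across farming systems.
res = linregress(years, cumndvi)
print(f"trend = {res.slope:.3f} cumNDVI/yr, p = {res.pvalue:.2f}")
```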

  10. Assessing the effects of pharmacological agents on respiratory dynamics using time-series modeling.

    Science.gov (United States)

    Wong, Kin Foon Kevin; Gong, Jen J; Cotten, Joseph F; Solt, Ken; Brown, Emery N

    2013-04-01

    Developing quantitative descriptions of how stimulant and depressant drugs affect the respiratory system is an important focus in medical research. Respiratory variables (respiratory rate, tidal volume, and end-tidal carbon dioxide) have prominent temporal dynamics that make it inappropriate to use standard hypothesis-testing methods that assume independent observations to assess the effects of these pharmacological agents. We present a polynomial signal plus autoregressive noise model for analysis of continuously recorded respiratory variables. We use a cyclic descent algorithm to maximize the conditional log likelihood of the parameters and the corrected Akaike's information criterion to choose simultaneously the orders of the polynomial and the autoregressive models. In an analysis of respiratory rates recorded from anesthetized rats before and after administration of the respiratory stimulant methylphenidate, we use the model to construct within-animal z-tests of the drug effect that take account of the time-varying nature of the mean respiratory rate and the serial dependence in rate measurements. We correct for the effect of model lack-of-fit on our inferences by also computing bootstrap confidence intervals for the average difference in respiratory rate pre- and post-methylphenidate treatment. Our time-series modeling quantifies within each animal the substantial increase in mean respiratory rate and respiratory dynamics following methylphenidate administration. This paradigm can be readily adapted to analyze the dynamics of other respiratory variables before and after pharmacologic treatments.
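    A numpy-only sketch of one cyclic-descent pass for this signal-plus-noise model: fit the polynomial trend by least squares, fit an AR(1) coefficient to the residuals by Yule-Walker, then refit the polynomial on quasi-differenced data, and iterate. Order selection by corrected AIC is omitted, the AR order is fixed at one for brevity, and the series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
t = np.linspace(0.0, 1.0, n)

# Synthetic respiratory-rate-like series: quadratic trend + AR(1) noise.
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.7 * e[i - 1] + rng.standard_normal()
y = 90 + 30 * t - 20 * t**2 + e

X = np.vander(t, 3)          # quadratic polynomial basis

# Cyclic descent: alternate the polynomial fit and the AR(1) fit.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(10):
    r = y - X @ beta
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])  # Yule-Walker
    yw = y[1:] - phi * y[:-1]      # quasi-difference (GLS whitening)
    Xw = X[1:] - phi * X[:-1]
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]

print("AR(1) coefficient ~", round(phi, 2), "; polynomial coefficients:", beta)
```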

  11. A Nonlinear Mixed Effects Approach for Modeling the Cell-To-Cell Variability of Mig1 Dynamics in Yeast.

    Directory of Open Access Journals (Sweden)

    Joachim Almquist

    Full Text Available The last decade has seen a rapid development of experimental techniques that allow data collection from individual cells. These techniques have enabled the discovery and characterization of variability within a population of genetically identical cells. Nonlinear mixed effects (NLME) modeling is an established framework for studying variability between individuals in a population, frequently used in pharmacokinetics and pharmacodynamics, but its potential for studies of cell-to-cell variability in molecular cell biology is yet to be exploited. Here we take advantage of this novel application of NLME modeling to study cell-to-cell variability in the dynamic behavior of the yeast transcription repressor Mig1. In particular, we investigate a recently discovered phenomenon where Mig1, during a short and transient period, exits the nucleus when cells experience a shift from high to intermediate levels of extracellular glucose. A phenomenological model based on ordinary differential equations describing the transient dynamics of nuclear Mig1 is introduced, and following the NLME methodology the parameters of this model are in turn modeled by a multivariate probability distribution. Using time-lapse microscopy data from nearly 200 cells, we estimate this parameter distribution by maximizing the population likelihood. Based on the estimated distribution, parameter values for individual cells are furthermore characterized and the resulting Mig1 dynamics are compared to the single-cell time-series data. The proposed NLME framework is also compared to the intuitive but limited standard two-stage (STS) approach. We demonstrate that the latter may overestimate variabilities by up to almost fivefold. Finally, Monte Carlo simulations of the inferred population model are used to predict the distribution of key characteristics of the Mig1 transient response. We find that with decreasing levels of post-shift glucose, the transient

  12. Analysis of Modal Travel Time Variability Due to Mesoscale Ocean Structure

    National Research Council Canada - National Science Library

    Smith, Amy

    1997-01-01

    .... First, for an open ocean environment away from strong boundary currents, the effects of randomly phased linear baroclinic Rossby waves on acoustic travel time are shown to produce a variable overall...

  13. An artificial pancreas provided a novel model of blood glucose level variability in beagles.

    Science.gov (United States)

    Munekage, Masaya; Yatabe, Tomoaki; Kitagawa, Hiroyuki; Takezaki, Yuka; Tamura, Takahiko; Namikawa, Tsutomu; Hanazaki, Kazuhiro

    2015-12-01

    Although the effects of blood glucose level variability on prognosis have gained increasing attention, it is unclear whether blood glucose variability itself worsens prognosis or whether it is merely a manifestation of the pathological conditions that do. Furthermore, no variability models of perioperative blood glucose levels have previously been reported. The aim of this study was to establish a novel variability model of blood glucose concentration using an artificial pancreas. We maintained six healthy male beagles. After anesthesia induction, a 20-G venous catheter was inserted in the right femoral vein and an artificial pancreas (STG-22, Nikkiso Co. Ltd., Tokyo, Japan) was connected for continuous blood glucose monitoring and glucose management. After achieving muscle relaxation, total pancreatectomy was performed. After 1 h of stabilization, automatic blood glucose control was initiated using the artificial pancreas. The blood glucose level was varied for 8 h, alternating between the target blood glucose values of 170 and 70 mg/dL. Eight hours later, the experiment was concluded. Total pancreatectomy was performed in 62 ± 13 min. Blood glucose swings were achieved 9.8 ± 2.3 times. The average blood glucose level was 128.1 ± 5.1 mg/dL with an SD of 44.6 ± 3.9 mg/dL. The potassium levels after stabilization and at the end of the experiment were 3.5 ± 0.3 and 3.1 ± 0.5 mmol/L, respectively. In conclusion, the results of the present study demonstrate that an artificial pancreas can be used to establish a novel variability model of blood glucose levels in beagles.

  14. Internal variability in a regional climate model over West Africa

    Energy Technology Data Exchange (ETDEWEB)

    Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

    2008-02-15

    Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)

  15. Prediction of hourly PM2.5 using a space-time support vector regression model

    Science.gov (United States)

    Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang

    2018-05-01

    Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
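    The three steps of the proposed model can be sketched with standard scikit-learn pieces: cluster stations into quasi-homogeneous subareas, build a spatial-lag feature from Gaussian distance weights (a stand-in for the paper's Gauss vector weight function), and fit one local SVR per subarea. The data, cluster count and kernel bandwidth below are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n_sta = 30
coords = rng.uniform(0, 100, size=(n_sta, 2))      # station locations (km)
pm_now = rng.uniform(20, 200, n_sta)               # PM2.5 at hour t
pm_next = 0.9 * pm_now + 5 + rng.normal(0, 5, n_sta)   # at hour t + 1

# Step 1: spatial clustering into quasi-homogeneous subareas.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Step 2: Gaussian distance weights build a spatial-autocorrelation
# feature from neighboring stations' current concentrations.
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = np.exp(-((d / 20.0) ** 2))
np.fill_diagonal(W, 0.0)
spatial_lag = W @ pm_now / W.sum(axis=1)

# Step 3: one local SVR per subarea, using own history + spatial lag.
for c in range(3):
    m = labels == c
    Xc = np.column_stack([pm_now[m], spatial_lag[m]])
    svr = SVR(kernel="rbf", C=100.0).fit(Xc, pm_next[m])
    print(f"subarea {c}: training R^2 = {svr.score(Xc, pm_next[m]):.2f}")
```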

  16. Impulsive synchronization and parameter mismatch of the three-variable autocatalator model

    International Nuclear Information System (INIS)

    Li, Yang; Liao, Xiaofeng; Li, Chuandong; Huang, Tingwen; Yang, Degang

    2007-01-01

    The synchronization problem for the three-variable autocatalator model is investigated via an impulsive control approach, and several theorems on the stability of impulsive control systems are presented. These theorems are then used to find the conditions under which the three-variable autocatalator model can be asymptotically controlled to the equilibrium point. This Letter derives sufficient conditions for the stabilization and synchronization of a three-variable autocatalator model via impulsive control with varying impulsive intervals. Furthermore, we address chaos quasi-synchronization in the presence of single-parameter mismatch. To illustrate the effectiveness of the new scheme, several numerical examples are given.

  17. Updating prognosis in primary biliary cirrhosis using a time-dependent Cox regression model. PBC1 and PBC2 trial groups

    DEFF Research Database (Denmark)

    Christensen, E; Altman, D G; Neuberger, J

    1993-01-01

    BACKGROUND: The precision of current prognostic models in primary biliary cirrhosis (PBC) is rather low, partly because they are based on data from just one time during the course of the disease. The aim of this study was to design a new, more precise prognostic model by incorporating follow-up data in the development of the model. METHODS: We have performed Cox regression analyses with time-dependent variables in 237 PBC patients followed up regularly for up to 11 years. The validity of the obtained models was tested by comparing predicted and observed survival in 147 independent PBC patients followed for up to 6 years. RESULTS: In the obtained model the following time-dependent variables independently indicated a poor prognosis: high bilirubin, low albumin, ascites, gastrointestinal bleeding, and old age. When including histological variables, cirrhosis, central cholestasis, and low
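    Models of this kind can be fitted today with, for example, the lifelines library's CoxTimeVaryingFitter, in which each patient contributes one row per follow-up interval with updated covariate values. The toy data frame below is invented and far too small for real inference; it only illustrates the data layout.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format toy data: one row per follow-up interval per patient,
# with covariates updated at each visit (all values invented).
df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 3, 4, 4],
    "start":     [0, 12, 24, 0, 12, 0, 0, 12],
    "stop":      [12, 24, 30, 12, 20, 18, 12, 26],
    "bilirubin": [1.2, 2.5, 6.0, 0.9, 4.2, 3.4, 2.0, 1.4],
    "albumin":   [38, 34, 28, 41, 33, 35, 36, 39],
    "event":     [0, 0, 1, 0, 0, 1, 0, 0],  # death closing the interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", start_col="start", stop_col="stop",
        event_col="event")
ctv.print_summary()
```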

  18. Modelling Packet Departure Times using a Known PDF

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2014-01-01

    Full Text Available This paper deals with IPTV traffic source modelling and describes a packet generator based on a known probability density function which is measured and formed from a histogram. Histogram-based probability density functions discard an amount of information, because the classes used to form the histogram often cover significantly more than one event. In this work, we propose an algorithm to generate far more output states of the random variable X than the number of classes the input probability distribution function is made from. The generator assumes that all IPTV packets of the same video stream have the same length; therefore, only packet times are generated. These times are generated using the measured normalized histogram, which is converted to a cumulative distribution function acting as a finite number of addressable states. To address these states we use an ON/OFF model driven by a uniform random number generator on (0, 1). When a state is chosen, the resulting value is equal to a histogram class. To raise the number of possible output states of the random variable X, we propose to use a second uniform random number generator that generates numbers within the range of the chosen histogram class. This second generator ensures that the number of output states is far larger than the number of histogram classes.
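    The two-stage generation scheme is easy to sketch: the first uniform draw addresses a histogram class through the cumulative distribution function, and the second uniform draw places the value inside that class. The measured inter-packet times below are replaced by synthetic ones.

```python
import numpy as np

rng = np.random.default_rng(4)

# Measured inter-packet times (ms); synthetic here, captured from an
# IPTV stream in the paper.
measured = rng.gamma(shape=2.0, scale=1.5, size=10_000)

# Normalized histogram -> empirical PDF and CDF over classes.
counts, edges = np.histogram(measured, bins=50)
cdf = np.cumsum(counts / counts.sum())

def generate_packet_times(n):
    """First uniform draw selects a histogram class via the CDF (the
    addressing step); a second uniform draw places the value inside
    that class, so far more output states than classes are possible."""
    k = np.minimum(np.searchsorted(cdf, rng.random(n)), counts.size - 1)
    lo, hi = edges[k], edges[k + 1]
    return lo + rng.random(n) * (hi - lo)

print(generate_packet_times(5))
```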

  19. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model: the upper-level model optimizes cycle length considering traffic capacity and delay, while the lower-level model is a dynamic signal control decision model based on fairness analysis. A Hybrid Intelligent Optimization Algorithm is then proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.

  20. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.

  1. Quadratic time dependent Hamiltonians and separation of variables

    Science.gov (United States)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.

  2. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    International Nuclear Information System (INIS)

    Lovejoy, S.; Lima, M. I. P. de

    2015-01-01

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate “macroweather” regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be “homogenized” by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.

  3. Simulation of the Quantity, Variability, and Timing of Streamflow in the Dennys River Basin, Maine, by Use of a Precipitation-Runoff Watershed Model

    Science.gov (United States)

    Dudley, Robert W.

    2008-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Maine Department of Marine Resources Bureau of Sea Run Fisheries and Habitat, began a study in 2004 to characterize the quantity, variability, and timing of streamflow in the Dennys River. The study included a synoptic summary of historical streamflow data at a long-term streamflow gage, collecting data from an additional four short-term streamflow gages, and the development and evaluation of a distributed-parameter watershed model for the Dennys River Basin. The watershed model used in this investigation was the USGS Precipitation-Runoff Modeling System (PRMS). The Geographic Information System (GIS) Weasel was used to delineate the Dennys River Basin and subbasins and derive parameters for their physical geographic features. Calibration of the models used in this investigation involved a four-step procedure in which model output was evaluated against four calibration data sets using computed objective functions for solar radiation, potential evapotranspiration, annual and seasonal water budgets, and daily streamflows. The calibration procedure involved thousands of model runs and was carried out using the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The SCE method reliably produces satisfactory solutions for large, complex optimization problems. The primary calibration effort went into the Dennys main stem watershed model. Calibrated parameter values obtained for the Dennys main stem model were transferred to the Cathance Stream model, and a similar four-step SCE calibration procedure was performed; this effort was undertaken to determine the potential to transfer modeling information to a nearby basin in the same region. The calibrated Dennys main stem watershed model performed with Nash-Sutcliffe efficiency (NSE) statistic values for the calibration period and evaluation period of 0.79 and 0
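    The Nash-Sutcliffe efficiency used to judge the calibrated model is a one-line computation; a value of 1 is a perfect fit, and values around 0.79, as reported for the calibration period, indicate good daily-streamflow agreement. A sketch with invented flows:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

# Invented daily streamflows (m^3/s) for illustration.
obs = np.array([3.1, 2.8, 2.5, 4.0, 9.5, 7.2, 5.1, 4.3])
sim = np.array([3.0, 2.9, 2.7, 3.6, 8.8, 7.9, 5.5, 4.1])
print(round(nse(obs, sim), 2))
```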

  4. Modeling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, Markus; Rey, Ana; Freibauer, Annette; Tenhunen, John; Valentini, Riccardo; Banza, Joao; Casals, Pere; Cheng, Yufu; Grünzweig, Jose M.; Irvine, James; Joffre, Richard; Law, Beverly E.; Loustau, Denis; Miglietta, Franco; Oechel, Walter; Ourcival, Jean-Marc; Pereira, Joao S.; Peressotti, Alessandro; Ponti, Francesca; Qi, Ye; Rambal, Serge; Rayment, Mark; Romanya, Joan; Rossi, Federica; Tedeschi, Vanessa; Tirone, Giampiero; Xu, Ming; Yakir, Dan

    2003-12-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal of developing a model describing seasonal, interannual and spatial variability of soil respiration as affected by water availability, temperature, and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g., leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, the analysis showed that the effect of precipitation on soil respiration extended beyond its direct effect via soil moisture. A general statistical nonlinear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content, and site-specific maximum leaf area index. The model explained nearly two-thirds of the temporal and intersite variability of soil respiration with a mean absolute error of 0.82 μmol m-2 s-1. The parameterized model exhibits the following principal properties: (1) At a relative amount of upper-layer soil water of 16% of field capacity, half-maximal soil respiration rates are reached. (2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. (3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly timescale, we employed the approach by [2002] that used monthly precipitation and air temperature to globally predict soil respiration (T&P model). While this model was able to

  5. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    Science.gov (United States)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high

  6. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  7. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalents of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time series and computing the change in response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
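    The perturbation-based sensitivity described here can be sketched generically: train any regressor on forcing features, nudge one input, and record the mean change in prediction per unit perturbation. The MLP stand-in, the synthetic MWA features and the response below are all invented; the original study used hybrid spectral-decomposition ANNs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 2000

# Invented forcing features: moving-window averages (MWA) of rainfall
# and groundwater pumping at two window lengths.
X = np.column_stack([rng.gamma(2.0, 1.0, n),    # rain, 30-day MWA
                     rng.gamma(2.0, 1.0, n),    # rain, 365-day MWA
                     rng.uniform(0, 1, n)])     # pumping, 365-day MWA
y = (10 + 0.8 * X[:, 0] + 2.0 * X[:, 1] - 3.0 * X[:, 2]
     + 0.1 * rng.standard_normal(n))            # lake level (m), invented

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)

def sensitivity(model, X, j, h=0.01):
    """Mean change in predicted response per unit perturbation of
    input j: a forcing-response sensitivity, not a parameter one."""
    Xp = X.copy()
    Xp[:, j] += h
    return float(np.mean(model.predict(Xp) - model.predict(X)) / h)

for j, name in enumerate(["rain MWA-30", "rain MWA-365", "pump MWA-365"]):
    print(name, round(sensitivity(model, X, j), 2))
```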

  8. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables (x_t, w_t), then selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  9. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
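    The "special method that can be applied to a particular class of distributed lags" mentioned above is often called the linear chain trick: a gamma-distributed delay kernel is exactly equivalent to a chain of extra ordinary differential equations. A sketch for logistic growth whose crowding term is felt through such a distributed lag (all parameters assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear chain trick: a gamma-distributed delay with m stages of rate a
# is equivalent to m extra ODEs, so logistic growth with a distributed
# lag in the crowding term becomes a plain ODE system (values assumed).
r, K, a, m = 1.0, 100.0, 2.0, 3

def rhs(t, u):
    N, y = u[0], u[1:]
    dN = r * N * (1.0 - y[-1] / K)      # crowding felt with a lag
    dy = np.empty(m)
    dy[0] = a * (N - y[0])
    for i in range(1, m):
        dy[i] = a * (y[i - 1] - y[i])   # each stage relaxes to the last
    return np.concatenate(([dN], dy))

u0 = np.concatenate(([5.0], np.full(m, 5.0)))
sol = solve_ivp(rhs, (0.0, 40.0), u0)
print(sol.y[0, -1])   # population settles near K (or cycles for long lags)
```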

  10. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    Science.gov (United States)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense non-aqueous phase liquids (DNAPL). This 4-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values, including the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical

  11. The application of convolution-based statistical model on the electrical breakdown time delay distributions in neon

    International Nuclear Information System (INIS)

    Maluckov, Cedomir A.; Karamarkovic, Jugoslav P.; Radovic, Miodrag K.; Pejovic, Momcilo M.

    2004-01-01

    The convolution-based model of the electrical breakdown time delay distribution is applied to the statistical analysis of experimental results obtained in a neon-filled diode tube at 6.5 mbar. First, numerical breakdown time delay density distributions are obtained by stochastic modeling as the sum of two independent random variables: the electrical breakdown statistical time delay, with an exponential distribution, and the discharge formative time, with a Gaussian distribution. Then, the single characteristic breakdown time delay distribution is obtained as the convolution of these two random variables with previously determined parameters. These distributions show good correspondence with the experimental distributions, obtained on the basis of 1000 successive and independent measurements. The shape of the distributions is investigated, and the corresponding skewness and kurtosis are plotted, in order to follow the transition from Gaussian to exponential distribution.
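    The convolution of an exponential statistical time delay with a Gaussian formative time is the exponentially modified Gaussian distribution, so the model can be sketched directly with scipy; the microsecond-scale parameters below are invented, not the fitted values for the neon tube.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Breakdown time delay = statistical delay (exponential) + formative
# time (Gaussian); microsecond-scale parameters invented.
t_delay = (rng.exponential(scale=200.0, size=1000)
           + rng.normal(loc=150.0, scale=20.0, size=1000))

# Shape diagnostics used to follow the Gaussian-to-exponential transition.
print("skewness:", stats.skew(t_delay))
print("kurtosis:", stats.kurtosis(t_delay))

# The analytic density of the sum is the exponentially modified
# Gaussian, available directly in scipy (K = tau / sigma):
emg = stats.exponnorm(K=200.0 / 20.0, loc=150.0, scale=20.0)
print("analytic mean:", emg.mean())   # ~ 150 + 200
```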

  12. Modeling the impact of forecast-based regime switches on macroeconomic time series

    NARCIS (Netherlands)

    K. Bel (Koen); R. Paap (Richard)

    2013-01-01

    Forecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition

  13. Hydrologic scales, cloud variability, remote sensing, and models: Implications for forecasting snowmelt and streamflow

    Science.gov (United States)

    Simpson, James J.; Dettinger, M.D.; Gehrke, F.; McIntire, T.J.; Hufford, Gary L.

    2004-01-01

    Accurate prediction of available water supply from snowmelt is needed if the myriad of human, environmental, agricultural, and industrial demands for water are to be satisfied, especially given legislatively imposed conditions on its allocation. Robust retrievals of hydrologic basin model variables (e.g., insolation or areal extent of snow cover) provide several advantages over the current operational use of either point measurements or parameterizations to help to meet this requirement. Insolation can be provided at hourly time scales (or better if needed during rapid melt events associated with flooding) and at 1-km spatial resolution. These satellite-based retrievals incorporate the effects of highly variable (both in space and time) and unpredictable cloud cover on estimates of insolation. The insolation estimates are further adjusted for the effects of basin topography using a high-resolution digital elevation model prior to model input. Simulations of two Sierra Nevada rivers in the snowmelt seasons of 1998 and 1999 indicate that even the simplest improvements in modeled insolation can improve snowmelt simulations, with 10%-20% reductions in root-mean-square errors. Direct retrieval of the areal extent of snow cover may mitigate the need to rely entirely on internal calculations of this variable, a reliance that can yield large errors that are difficult to correct until long after the season is complete and that often leads to persistent underestimates or overestimates of the volumes of the water to operational reservoirs. Agencies responsible for accurately predicting available water resources from the melt of snowpack [e.g., both federal (the National Weather Service River Forecast Centers) and state (the California Department of Water Resources)] can benefit by incorporating concepts developed herein into their operational forecasting procedures. © 2004 American Meteorological Society.

  14. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    Science.gov (United States)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
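    One simple way to reproduce the double (daily and weekly) seasonality with forecastable uncertainty is harmonic regression with Fourier terms at 24 h and 168 h plus an AR(1) residual; this is a plain numpy stand-in for the paper's double-seasonal time series model, with synthetic hourly demands.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 24 * 7 * 8                        # eight weeks of hourly demands
t = np.arange(T)
y = (50 + 10 * np.sin(2 * np.pi * t / 24)
        + 4 * np.sin(2 * np.pi * t / 168)
     + rng.normal(0, 1.5, T))         # synthetic hourly demand (L/s)

def fourier(t, period, k=2):
    """Sine/cosine pairs at the first k harmonics of one seasonality."""
    return np.column_stack([f(2 * np.pi * (h + 1) * t / period)
                            for h in range(k) for f in (np.sin, np.cos)])

# Double-seasonal mean (24 h and 168 h) fitted by least squares, plus
# an AR(1) on the residuals for short-memory corrections.
X = np.column_stack([np.ones(T), fourier(t, 24), fourier(t, 168)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ beta
phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])

# One-step-ahead forecast with an approximate 95% prediction interval.
t1 = np.array([T])
x1 = np.column_stack([np.ones(1), fourier(t1, 24), fourier(t1, 168)])
mean1 = float(x1 @ beta) + phi * r[-1]
sigma = np.std(r[1:] - phi * r[:-1])
print(f"forecast: {mean1:.1f} +/- {1.96 * sigma:.1f}")
```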

  15. New insights into soil temperature time series modeling: linear or nonlinear?

    Science.gov (United States)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture, because ST plays a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing on four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that, in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and modeling DST as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods, considering HV as input variables. The comparison results show that the relative error in estimating DST by the proposed methodology was about 6%, while with MLP and ANFIS it was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance compared to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and

  16. Drivers and potential predictability of summer time North Atlantic polar front jet variability

    Science.gov (United States)

    Hall, Richard J.; Jones, Julie M.; Hanna, Edward; Scaife, Adam A.; Erdélyi, Róbert

    2017-06-01

    The variability of the North Atlantic polar front jet stream is crucial in determining summer weather around the North Atlantic basin. Recent extreme summers in western Europe and North America have highlighted the need for greater understanding of this variability, in order to aid seasonal forecasting and mitigate societal, environmental and economic impacts. Here we find that simple linear regression and composite models based on a few predictable factors are able to explain up to 35 % of summertime jet stream speed and latitude variability from 1955 onwards. Sea surface temperature forcings impact predominantly on jet speed, whereas solar and cryospheric forcings appear to influence jet latitude. The cryospheric associations come from the previous autumn, suggesting the survival of an ice-induced signal through the winter season, whereas solar influences lead jet variability by a few years. Regression models covering the earlier part of the twentieth century are much less effective, presumably due to decreased availability of data, and increased uncertainty in observational reanalyses. Wavelet coherence analysis identifies that associations fluctuate over the study period but it is not clear whether this is just internal variability or genuine non-stationarity. Finally we identify areas for future research.

  17. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy of the annunciated result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 s to 1 ms for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr

  18. Control of fast non linear systems - application to a turbo charged SI engine with variable valve timing; controle des systemes rapides non lineaires - application au moteur a allumage commande turbocompresse a distribution variable

    Energy Technology Data Exchange (ETDEWEB)

    Colin, G.

    2006-10-15

    Spark ignition engine control has become a major issue for compliance with emissions legislation while ensuring driving comfort. Engine down-sizing is one of the promising ways to reduce fuel consumption and the resulting CO2 emissions. Combining several existing technologies such as supercharging and variable valve actuation, down-sizing is a typical example of the problems encountered in Spark Ignition (SI) engine control: nonlinear systems with actuator saturation; numerous major physical phenomena that are not measurable; limited computing time; and control objectives (consumption, pollution, performance) that often compete. A methodology of modelling and model-based control (internal model and predictive control) for these systems is proposed and applied to the air path of the down-sized engine. Physical and generic models are built to estimate in-cylinder air mass, residual burned gas mass and air mass scavenged from the intake to the exhaust. The complete and generic engine torque control architecture for the turbo-charged SI engine with variable camshaft timing was tested in simulation and experimentally (on engine and vehicle). These tests show that new possibilities are offered for decreasing pollutant emissions and optimizing engine efficiency. (author)

  19. Sensitivity of Variables with Time for Degraded RC Shear Wall with Low Steel Ratio under Seismic Load

    International Nuclear Information System (INIS)

    Park, Jun Hee; Choun, Young Sun; Choi, In Kil

    2011-01-01

    Various factors lead to the degradation of reinforced concrete (RC) shear walls over time. Steel section loss, concrete spalling and material strength were considered in the structural analysis of the degraded shear wall. When all variables related to degradation are considered in the probabilistic evaluation of a degraded shear wall, considerable time and effort are required. It is therefore necessary to identify the variables most important to the structural behavior in order to conduct probabilistic seismic analyses of structures with age-related degradation efficiently. In this study, the variables were defined as functions of time in order to represent degradation with time. The importance of these time-dependent variables for the seismic response was investigated by conducting a sensitivity analysis.

  20. Influences of variables on ship collision probability in a Bayesian belief network model

    International Nuclear Information System (INIS)

    Hänninen, Maria; Kujala, Pentti

    2012-01-01

    The influences of the variables in a Bayesian belief network model for estimating the role of human factors in ship collision probability in the Gulf of Finland are studied in order to discover the variables with the largest influences and to examine the validity of the network. The change in the so-called causation probability is examined while observing each state of the network variables and by utilizing sensitivity and mutual information analyses. Changing course in an encounter situation is the most influential variable in the model, followed by variables such as the Officer of the Watch's action, situation assessment, danger detection, personal condition and incapacitation. The least influential variables are the other distractions on the bridge, the bridge view, maintenance routines and the officer's fatigue. In general, the methods are found to agree on the order of the model variables, although some disagreements arise due to slightly dissimilar approaches to the concept of variable influence. The relative values and the ranking of variables based on these values prove more valuable than the actual numerical values themselves. Although the most influential variables seem plausible, there are some discrepancies between the influences indicated in the model and those in the literature. Thus, improvements are suggested to the network.

  1. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    Science.gov (United States)

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

  2. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model for steam turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency in order to obtain a model able to simulate all the modelled equipment. (A.C.A.S.)

  3. Modeling and Design Optimization of Variable-Speed Wind Turbine Systems

    Directory of Open Access Journals (Sweden)

    Ulas Eminoglu

    2014-01-01

    Full Text Available As a result of the increase in energy demand and government subsidies, the usage of wind turbine systems (WTSs) has increased dramatically. Due to the higher energy production of a variable-speed WTS as compared to a fixed-speed WTS, the demand for this type of WTS has increased. In this study, a new method for the calculation of the power output of variable-speed WTSs is proposed. The proposed model is developed from the S-type curve used for population growth, and is only a function of the rated power and rated (nominal) wind speed. It has the advantage of enabling the user to calculate power output without using the rotor power coefficient. Additionally, by using the developed model, a mathematical method to calculate the value of rated wind speed in terms of turbine capacity factor and the scale parameter of the Weibull distribution for a given wind site is also proposed. Design optimization studies are performed by using the particle swarm optimization (PSO) and artificial bee colony (ABC) algorithms, which are applied to this type of problem for the first time. Different sites, such as northern and Mediterranean sites of Europe, have been studied. Analyses for various parameters are also presented in order to evaluate the effect of rated wind speed on the design parameters and produced energy cost. Results show that the proposed models are reliable and very useful for modeling and optimization of WTS design by taking into account the wind potential of the region. Results also show that the PSO algorithm has better performance than the ABC algorithm for this type of problem.
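
    The paper's S-type model is built only from the rated power and rated wind speed; the sketch below illustrates the idea with a logistic curve whose steepness and midpoint are assumptions rather than the published coefficients, together with a Weibull-weighted capacity factor for a hypothetical site.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import weibull_min

    def s_curve_power(v, p_rated, v_rated, k=0.8):
        """Logistic (S-type) power curve; output approaches p_rated near v_rated.
        The steepness k and the midpoint at v_rated/2 are illustrative assumptions."""
        return p_rated / (1.0 + np.exp(-k * (v - 0.5 * v_rated)))

    def capacity_factor(p_rated, v_rated, shape=2.0, scale=7.5, v_cut_out=25.0):
        """Capacity factor for a site described by a Weibull wind-speed distribution."""
        integrand = lambda v: (s_curve_power(v, p_rated, v_rated)
                               * weibull_min.pdf(v, shape, scale=scale))
        energy, _ = quad(integrand, 0.0, v_cut_out)
        return energy / p_rated

    print(capacity_factor(p_rated=2000.0, v_rated=12.0))   # assumed 2 MW turbine
    ```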

  4. Combining meteorological radar and network of rain gauges data for space–time model development

    OpenAIRE

    Pastoriza, Vicente; Núñez Fernández, Adolfo; Machado, Fernando; Mariño, Perfecto; Pérez Fontán, Fernando; Fiebig, Uwe-Carsten

    2009-01-01

    Technological developments and the trend to go higher and higher in frequency give rise to the need for true space–time rain field models for testing the dynamics of fade countermeasures. There are many models that capture the spatial correlation of rain fields. Worth mentioning are those models based on cell ensembles. However, the rain rate fields created in this way need the introduction of the time variable to reproduce their dynamics. In this paper, we have concentrated on ad...

  5. Time-variable gravity fields derived from GPS tracking of Swarm

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef; da Encarnacao, J.T.; Klokočník, Jaroslav

    2016-01-01

    Vol. 205, No. 3 (2016), pp. 1665-1669 ISSN 0956-540X R&D Projects: GA MŠk LG14026; GA ČR GA13-36843S Institutional support: RVO:67985815 Keywords: satellite geodesy * time variable gravity * global change from geodesy Subject RIV: DD - Geochemistry Impact factor: 2.414, year: 2016

  6. Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis

    Science.gov (United States)

    Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.

    2015-06-01

    This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using the finite mixture of ARIMA models with conditional variance equations such as the ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reason behind the behavior of the PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers’ Price Index and terms of trade - can be used in projecting future values of the PSEi; this was examined using the Granger causality test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5)-ARCH(1). Also, the Consumer Price Index, crude oil price and foreign exchange rate were concluded to Granger-cause the Philippine Stock Exchange Composite Index.
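
    A minimal sketch of fitting the reported ARIMA(1,1,5)-ARCH(1) structure and running a Granger causality test with statsmodels and the arch package. The file name and column names are placeholders, and this is not the authors' code; it only mirrors the model structure named in the abstract.

    ```python
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.stattools import grangercausalitytests
    from arch import arch_model

    df = pd.read_csv("psei_monthly.csv", index_col=0, parse_dates=True)  # hypothetical file

    # Mean equation: ARIMA(1,1,5) on the index level.
    mean_fit = ARIMA(df["psei"], order=(1, 1, 5)).fit()

    # Variance equation: ARCH(1) on the mean-model residuals.
    vol_fit = arch_model(mean_fit.resid.dropna(), mean="Zero", vol="ARCH", p=1).fit(disp="off")
    print(vol_fit.summary())

    # Does CPI Granger-cause the index? The second column is tested against the first.
    grangercausalitytests(df[["psei", "cpi"]].diff().dropna(), maxlag=4)
    ```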

  7. On diffusion processes with variable drift rates as models for decision making during learning

    International Nuclear Information System (INIS)

    Eckhoff, P; Holmes, P; Law, C; Connolly, P M; Gold, J I

    2008-01-01

    We investigate Ornstein-Uhlenbeck and diffusion processes with variable drift rates as models of evidence accumulation in a visual discrimination task. We derive power-law and exponential drift-rate models and characterize how parameters of these models affect the psychometric function describing performance accuracy as a function of stimulus strength and viewing time. We fit the models to psychophysical data from monkeys learning the task to identify parameters that best capture performance as it improves with training. The most informative parameter was the overall drift rate describing the signal-to-noise ratio of the sensory evidence used to form the decision, which increased steadily with training. In contrast, secondary parameters describing the time course of the drift during motion viewing did not exhibit steady trends. The results indicate that relatively simple versions of the diffusion model can fit behavior over the course of training, thereby giving a quantitative account of learning effects on the underlying decision process
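
    As an illustration of the exponential drift-rate variant described above, this Monte Carlo sketch integrates an Ornstein-Uhlenbeck accumulator to a symmetric threshold. All parameter values are assumptions chosen for demonstration, not fits to the monkey data; trials that never reach a bound within the viewing time are scored as errors here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(a0, a1, tau, lam, sigma, z, dt=1e-3, t_max=2.0, n=20_000):
        """Fraction of trials absorbed at +z for dx = (a(t) + lam*x) dt + sigma dW,
        with an exponentially decaying drift a(t) = a0 + a1 * exp(-t / tau)."""
        x = np.zeros(n)
        alive = np.ones(n, dtype=bool)
        hits = 0
        for t in np.arange(0.0, t_max, dt):
            drift = a0 + a1 * np.exp(-t / tau)
            idx = np.flatnonzero(alive)
            x[idx] += (drift + lam * x[idx]) * dt \
                      + sigma * np.sqrt(dt) * rng.standard_normal(idx.size)
            up, down = alive & (x >= z), alive & (x <= -z)
            hits += up.sum()
            alive &= ~(up | down)                 # absorbed trials stop evolving
        return hits / n

    print(accuracy(a0=0.5, a1=1.0, tau=0.3, lam=-0.5, sigma=1.0, z=1.0))
    ```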

  8. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights: •Off-line estimation approach for continuous-time domain for non-invertible function. •Model reformulated to multi-input-single-output; nonlinearity described by sigmoid. •Method directly estimates parameters of nonlinear ECM from the measured data. •Iterative on-line technique leads to smoother convergence. •The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs) where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time-domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite...
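
    The recursive least-squares update adopted for the on-line stage has the standard form sketched below. The construction of the regressor vector for the reformulated multi-input, single-output Wiener model is paper-specific, so the vector phi is left abstract here.

    ```python
    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.999):
        """One recursive least-squares update with forgetting factor lam.
        theta: parameter estimate, P: covariance, phi: regressor, y: measured output."""
        phi = np.asarray(phi, dtype=float).reshape(-1)
        k = P @ phi / (lam + phi @ P @ phi)          # gain vector
        theta = theta + k * (y - phi @ theta)        # correct by the prediction error
        P = (P - np.outer(k, phi @ P)) / lam         # covariance update
        return theta, P

    # e.g. start from theta = zeros(n) and P = 1e3 * eye(n), then feed each new
    # sample's (phi, y) pair as the battery current/voltage data arrive.
    ```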

  9. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    Science.gov (United States)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
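
    The power and coherence spectra mentioned above follow from the fitted MVAR coefficients; below is a sketch using statsmodels with the transfer-function algebra written out. The data array and sampling rate are placeholders, and real analyses would fit the model within each quasi-stationary window.

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    def spectral_matrix(res, freqs, fs):
        """S(f) = H(f) Sigma H(f)^H from a fitted VAR; power is the diagonal and
        coherence_ij(f) = |S_ij(f)|**2 / (S_ii(f) * S_jj(f))."""
        A = res.coefs                          # (p, k, k) lag coefficient matrices
        Sigma = np.asarray(res.sigma_u)        # innovation covariance
        k = A.shape[1]
        out = []
        for f in freqs:
            Af = np.eye(k, dtype=complex)
            for m in range(A.shape[0]):
                Af -= A[m] * np.exp(-2j * np.pi * f * (m + 1) / fs)
            H = np.linalg.inv(Af)              # transfer function at frequency f
            out.append(H @ Sigma @ H.conj().T)
        return np.array(out)

    # res = VAR(lfp_window).fit(maxlags=10, ic="aic")   # lfp_window: (time, channels)
    ```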

  10. Use of Machine Learning Techniques for Identification of Robust Teleconnections to East African Rainfall Variability in Observations and Models

    Science.gov (United States)

    Roberts, J. Brent; Robertson, Franklin R.; Funk, Chris

    2014-01-01

    Providing advance warning of East African rainfall variations is a particular focus of several groups including those participating in the Famine Early Warning Systems Network. Both seasonal and long-term model projections of climate variability are being used to examine the societal impacts of hydrometeorological variability on seasonal to interannual and longer time scales. The NASA/USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of both seasonal and climate model projections to develop downscaled scenarios for use in impact modeling. The utility of these projections is reliant on the ability of current models to capture the embedded relationships between East African rainfall and evolving forcing within the coupled ocean-atmosphere-land climate system. Previous studies have posited relationships between variations in El Niño, the Walker circulation, Pacific decadal variability (PDV), and anthropogenic forcing. This study applies machine learning methods (e.g., clustering, probabilistic graphical models, nonlinear PCA) to observational datasets in an attempt to expose the importance of local and remote forcing mechanisms of East African rainfall variability. The ability of the NASA Goddard Earth Observing System (GEOS5) coupled model to capture the associated relationships will be evaluated using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations.

  11. Transient modelling of a natural circulation loop under variable pressure

    International Nuclear Information System (INIS)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian; Instituto de Engenharia Nuclear

    2017-01-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is a one-tenth-height scale model of a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the corresponding single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the coolant heating during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was computationally implemented via a set of finite difference equations. The corresponding solution algorithm was of the explicit, marching type in the time discretization, with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the...

  12. Transient modelling of a natural circulation loop under variable pressure

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

    2017-07-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is a one-tenth-height scale model of a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the corresponding single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the dilatation of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the coolant heating during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was computationally implemented via a set of finite difference equations. The corresponding solution algorithm was of the explicit, marching type in the time discretization, with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the...
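
    The isentropic treatment of the trapped air admits a one-line closed form; the sketch below is that relation, with illustrative numbers rather than the loop's actual geometry.

    ```python
    def air_pressure(p0, V0, dV_water, gamma=1.4):
        """Isentropic compression of the trapped air: p * V**gamma stays constant,
        while the air volume shrinks by the thermal expansion dV_water of the water."""
        return p0 * (V0 / (V0 - dV_water)) ** gamma

    # e.g. 0.02 m^3 of air at 1 bar compressed by 2 litres of water dilatation:
    print(air_pressure(p0=1.0e5, V0=0.02, dV_water=0.002))   # ~1.16e5 Pa
    ```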

  13. Century-scale variability in global annual runoff examined using a water balance model

    Science.gov (United States)

    McCabe, G.J.; Wolock, D.M.

    2011-01-01

    A monthly water balance model (WB model) is used with CRUTS2.1 monthly temperature and precipitation data to generate time series of monthly runoff for all land areas of the globe for the period 1905 through 2002. Even though annual precipitation accounts for most of the temporal and spatial variability in annual runoff, increases in temperature have had an increasingly negative effect on annual runoff after 1980. Although the effects of increasing temperature on runoff became more apparent after 1980, the relative magnitude of these effects is small compared to the effects of precipitation on global runoff. © 2010 Royal Meteorological Society.
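
    For intuition, a toy monthly water-balance step in the spirit of a WB model (not the authors' parameterisation): actual evapotranspiration is limited by available water, soil storage is capped, and the surplus becomes runoff.

    ```python
    def monthly_runoff(precip, pet, smax=150.0, s0=75.0):
        """precip, pet: monthly series (mm); returns runoff (mm) for each month.
        smax is an assumed soil-storage capacity, s0 the assumed initial storage."""
        s, runoff = s0, []
        for p, e in zip(precip, pet):
            available = s + p
            aet = min(e, available)          # actual ET cannot exceed available water
            s = available - aet
            q = max(0.0, s - smax)           # storage surplus leaves as runoff
            s -= q
            runoff.append(q)
        return runoff
    ```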

  14. Discrete random walk models for space-time fractional diffusion

    International Nuclear Information System (INIS)

    Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

    2002-01-01

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation

  15. AMOC decadal variability in Earth system models: Mechanisms and climate impacts

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, Alexey [Yale Univ., New Haven, CT (United States)

    2017-09-06

    This is the final report for the project titled "AMOC decadal variability in Earth system models: Mechanisms and climate impacts". The central goal of this one-year research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) within a hierarchy of climate models ranging from realistic ocean GCMs to Earth system models. The AMOC is a key element of ocean circulation responsible for oceanic transport of heat from low to high latitudes and controlling, to a large extent, climate variations in the North Atlantic. The questions of the AMOC stability, variability and predictability, directly relevant to the questions of climate predictability, were at the center of the research work.

  16. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    variable G and bulk viscosity in Lyra geometry. Exact solutions for ... a comparative study of Robertson–Walker models with a constant deceleration ... where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B), and β₀, H₀ represent the present values of β ...

  17. Climate model assessment of changes in winter-spring streamflow timing over North America

    Science.gov (United States)

    Kam, Jonghun; Knutson, Thomas R.; Milly, Paul C. D.

    2018-01-01

    Over regions where snow-melt runoff substantially contributes to winter-spring streamflows, warming can accelerate snow melt and reduce dry-season streamflows. However, conclusive detection of changes and attribution to anthropogenic forcing is hindered by the brevity of observational records, model uncertainty, and uncertainty concerning internal variability. In this study, a detection/attribution analysis of changes in mid-latitude North American winter-spring streamflow timing is performed using nine global climate models under multiple forcing scenarios, and robustness across models, start/end dates for trends, and assumptions about internal variability is evaluated. Marginal evidence for an emerging detectable anthropogenic influence (according to four or five of nine models) is found in the north-central U.S., where winter-spring streamflows have been coming earlier. Weaker indications of detectable anthropogenic influence (three of nine models) are found in the mountainous western U.S./southwestern Canada and in the extreme northeastern U.S./Canadian Maritimes. In the former region, a recent shift toward later streamflows has rendered the full-record trend toward earlier streamflows only marginally significant, with possible implications for previously published climate change detection findings for streamflow timing in this region. In the latter region, no forced model shows as large a shift toward earlier streamflow timing as the detectable observed shift. In other (including warm, snow-free) regions, observed trends are typically not detectable, although in the U.S. central plains we find detectable delays in streamflow, which are inconsistent with forced model experiments.

  18. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by using viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structure of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated, and new hybrid approaches are proposed.

  19. Annual Research Review: Reaction time variability in ADHD and autism spectrum disorders: measurement and mechanisms of a proposed trans-diagnostic phenotype

    Science.gov (United States)

    Karalunas, Sarah L.; Geurts, Hilde M.; Konrad, Kerstin; Bender, Stephan; Nigg, Joel T.

    2014-01-01

    Background Intraindividual variability in reaction time (RT) has received extensive discussion as an indicator of cognitive performance, a putative intermediate phenotype of many clinical disorders, and a possible trans-diagnostic phenotype that may elucidate shared risk factors for mechanisms of psychiatric illnesses. Scope and Methodology Using the examples of attention deficit hyperactivity disorder (ADHD) and autism spectrum disorders (ASD), we discuss RT variability. We first present a new meta-analysis of RT variability in ASD with and without comorbid ADHD. We then discuss potential mechanisms that may account for RT variability and statistical models that disentangle the cognitive processes affecting RTs. We then report a second meta-analysis comparing ADHD and non-ADHD children on diffusion model parameters. We consider how findings inform the search for neural correlates of RT variability. Findings Results suggest that RT variability is increased in ASD only when children with comorbid ADHD are included in the sample. Furthermore, RT variability in ADHD is explained by moderate to large increases (d = 0.63–0.99) in the ex-Gaussian parameter τ and the diffusion parameter drift rate, as well as by smaller differences (d = 0.32) in the diffusion parameter of nondecision time. The former may suggest problems in state regulation or arousal and difficulty detecting signal from noise, whereas the latter may reflect contributions from deficits in motor organization or output. The neuroimaging literature converges with this multicomponent interpretation and also highlights the role of top-down control circuits. Conclusion We underscore the importance of considering the interactions between top-down control, state regulation (e.g. arousal), and motor preparation when interpreting RT variability and conclude that decomposition of the RT signal provides superior interpretive power and suggests mechanisms convergent with those implicated using other cognitive...

  20. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify state prediction. Consequently ... the control problem of the nonlinear system is simplified into a quadratic program. A wind turbine is chosen as the case study, and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore, the scheduling variable is known for the entire prediction horizon.
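
    A compact sketch of the idea with cvxpy: because the scheduling sequence is known over the horizon, each A(rho_k) is a constant matrix at solve time and the MPC problem is an ordinary quadratic program. The system matrices, input bound and horizon length are placeholders, not the wind turbine model from the paper.

    ```python
    import cvxpy as cp
    import numpy as np

    def lpv_mpc_step(x0, rho_seq, A_of, B, Q, R, N, u_max=1.0):
        """First control move for x_{k+1} = A(rho_k) x_k + B u_k with known rho_seq."""
        n, m = B.shape
        x = cp.Variable((n, N + 1))
        u = cp.Variable((m, N))
        cost, cons = 0, [x[:, 0] == x0]
        for k in range(N):
            Ak = A_of(rho_seq[k])                 # known future scheduling value
            cons += [x[:, k + 1] == Ak @ x[:, k] + B @ u[:, k],
                     cp.norm(u[:, k], "inf") <= u_max]
            cost += cp.quad_form(x[:, k + 1], Q) + cp.quad_form(u[:, k], R)
        cp.Problem(cp.Minimize(cost), cons).solve()
        return u.value[:, 0]
    ```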

  1. A geometric model for magnetizable bodies with internal variables

    Directory of Open Access Journals (Sweden)

    Restuccia, L

    2005-11-01

    Full Text Available In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

  2. Meta-modeling of occupancy variables and analysis of their impact on energy outcomes of office buildings

    International Nuclear Information System (INIS)

    Wang, Qinpeng; Augenbroe, Godfried; Kim, Ji-Hyun; Gu, Li

    2016-01-01

    Highlights: • A meta-analysis framework for a stochastic characterization of occupancy variables. • Sensitivity ranking of occupancy variability against all other sources of uncertainty. • Sensitivity of building energy consumption to occupant presence is low. • Accurate mean knowledge is sufficient for predicting building energy consumption. • Prediction of peak demand behavior requires stochastic occupancy modeling. - Abstract: Occupants interact with buildings in various ways via their presence (passive effects) and control actions (active effects). Therefore, understanding the influence of occupants is essential if we are to evaluate the performance of a building. In this paper, we model the mean profiles and variability of occupancy variables (presence and actions) separately. We use a multi-variate Gaussian distribution to generate mean profiles of occupancy variables, while the variability is represented by a multi-dimensional time series model, within a framework for a meta-analysis that synthesizes occupancy data gathered from a pool of buildings. We then discuss variants of occupancy models with respect to various outcomes of interest such as HVAC energy consumption and peak demand behavior via a sensitivity analysis. Results show that our approach is able to generate stochastic occupancy profiles, requiring minimal additional input from the energy modeler other than standard diversity profiles. Along with the meta-analysis, we enable the generalization of previous research results and statistical inferences to choose occupancy variables for future buildings. The sensitivity analysis shows that for aggregated building energy consumption, occupant presence has a smaller impact compared to lighting and appliance usage. Specifically, a cumulative 55% error with regard to presence translates to only a 2% error in aggregated cooling energy in July and a 3.6% error in heating energy in January. Such a finding redirects focus to the...
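
    A sketch of the two-part generator described above: a multivariate-Gaussian draw supplies the building's mean hourly profile, and an AR(1) series supplies the variability around it. The covariance, persistence and noise values are placeholders, not the fitted meta-model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def occupancy_days(mean_profile, cov, n_days=30, phi=0.7, sigma=0.05):
        """Daily occupancy fractions: Gaussian mean profile + AR(1) hourly variability."""
        base = rng.multivariate_normal(mean_profile, cov)   # building-level mean profile
        days = np.empty((n_days, len(base)))
        for d in range(n_days):
            eps = 0.0
            for t in range(len(base)):
                eps = phi * eps + sigma * rng.standard_normal()
                days[d, t] = np.clip(base[t] + eps, 0.0, 1.0)
        return days
    ```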

  3. Time-series modeling: applications to long-term finfish monitoring data

    International Nuclear Information System (INIS)

    Bireley, L.E.

    1985-01-01

    The growing concern and awareness that developed during the 1970s over the effects that industry had on the environment caused the electric utility industry in particular to develop monitoring programs. These programs generate long-term series of data that are not very amenable to classical normal-theory statistical analysis. The monitoring data collected from three finfish programs (impingement, trawl and seine) at the Millstone Nuclear Power Station were typical of such series and thus were used to develop methodology that used the full extent of the information in the series. The basis of the methodology was classic Box-Jenkins time-series modeling; however, the models also included deterministic components that involved flow, season and time as predictor variables. Time entered into the models as harmonic regression terms. Of the 32 models fitted to finfish catch data, 19 were found to account for more than 70% of the historical variation. The models were then used to forecast finfish catches a year in advance, and comparisons were made to actual data. Usually the confidence intervals associated with the forecasts encompassed most of the observed data. The technique can provide the basis for intervention analysis in future impact assessments
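
    The deterministic-plus-ARMA structure described here maps naturally onto a regression-with-ARMA-errors fit. A sketch with statsmodels follows, where the catch series, flow covariate, harmonic period and ARMA order are placeholders rather than the Millstone models.

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def fit_catch_model(catch, flow, period=52):
        """ARMA errors around harmonic (seasonal) terms and a flow covariate."""
        t = np.arange(len(catch))
        exog = np.column_stack([np.sin(2 * np.pi * t / period),
                                np.cos(2 * np.pi * t / period),
                                flow])
        return SARIMAX(catch, exog=exog, order=(1, 0, 1)).fit(disp=False)

    # res.get_forecast(steps, exog=future_exog) yields year-ahead forecasts whose
    # confidence intervals can be compared against the newly observed catches.
    ```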

  4. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  5. A deep X-ray view of the bare AGN Ark 120. III. X-ray timing analysis and multiwavelength variability

    Science.gov (United States)

    Lobban, A. P.; Porquet, D.; Reeves, J. N.; Markowitz, A.; Nardini, E.; Grosso, N.

    2018-03-01

    We present the spectral/timing properties of the bare Seyfert galaxy Ark 120 through a deep ~420 ks XMM-Newton campaign plus recent NuSTAR observations and a ~6-month Swift monitoring campaign. We investigate the spectral decomposition through fractional rms, covariance and difference spectra, finding the mid- to long-time-scale (~day-year) variability to be dominated by a relatively smooth, steep component, peaking in the soft X-ray band. Additionally, we find evidence for variable Fe K emission redward of the Fe Kα core on long time-scales, consistent with previous findings. We detect a clearly defined power spectrum which we model with a power law with a slope of α ~ 1.9. By extending the power spectrum to lower frequencies through the inclusion of Swift and Rossi X-ray Timing Explorer data, we find tentative evidence of a high-frequency break, consistent with existing scaling relations. We also explore frequency-dependent Fourier time lags, detecting a negative (`soft') lag for the first time in this source with the 0.3-1 keV band lagging behind the 1-4 keV band with a time delay, τ, of ~900 s. Finally, we analyse the variability in the optical and ultraviolet (UV) bands using the Optical/UV Monitor onboard XMM-Newton and the Ultra-Violet/Optical Telescope onboard Swift and search for time-dependent correlations between the optical/UV/X-ray bands. We find tentative evidence for the U-band emission lagging behind the X-rays with a time delay of τ = 2.4 ± 1.8 d, which we discuss in the context of disc reprocessing.
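
    Frequency-dependent lags of this kind come from the phase of the cross-spectrum between two band light curves. A bare-bones single-segment version is sketched below; real analyses average over segments and frequency bins, and the sign convention should be checked against the definition in use.

    ```python
    import numpy as np

    def fourier_lags(soft, hard, dt):
        """Time lag versus frequency from the cross-spectrum of two evenly sampled
        light curves. With this convention, positive values mean the soft band lags
        the hard band (soft(t) = hard(t - tau) gives lag = +tau)."""
        f = np.fft.rfftfreq(soft.size, dt)[1:]                       # skip f = 0
        cross = np.fft.rfft(hard)[1:] * np.conj(np.fft.rfft(soft)[1:])
        return f, np.angle(cross) / (2.0 * np.pi * f)
    ```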

  6. Discrete-time bidirectional associative memory neural networks with variable delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde; Ho, Daniel W.C.

    2005-01-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks

  7. Discrete-time bidirectional associative memory neural networks with variable delays

    Science.gov (United States)

    Liang, J.; Cao, J.; Ho, D. W. C.

    2005-02-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.
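
    The delay-dependent LMI criteria in the Letter are more elaborate, but their computational pattern is the standard semidefinite feasibility check sketched below for the delay-free discrete-time case (cvxpy with an SDP-capable solver assumed; delay terms would add further decision matrices to the same recipe).

    ```python
    import cvxpy as cp
    import numpy as np

    def stable_lmi(A, eps=1e-6):
        """Feasibility of P > 0 with A.T P A - P < 0 certifies global exponential
        stability of the linear discrete-time system x_{k+1} = A x_k."""
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        cons = [P >> eps * np.eye(n),
                A.T @ P @ A - P << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve(solver=cp.SCS)
        return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

    print(stable_lmi(np.array([[0.5, 0.2], [0.0, 0.8]])))   # True: stable example
    ```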

  8. Speech-discrimination scores modeled as a binomial variable.

    Science.gov (United States)

    Thornton, A R; Raffin, M J

    1978-09-01

    Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
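
    Under the binomial model, the significance table reduces to binomial quantiles. The sketch below shows the underlying computation with scipy; the 50-item list length matches the W-22 context, but the cutoffs it prints are illustrative, not the published table.

    ```python
    from scipy.stats import binom

    def retest_range(score, n=50, alpha=0.05):
        """Range of scores (words correct out of n) consistent, at the (1 - alpha)
        level, with a true discrimination probability p = score / n."""
        p = score / n
        lo = int(binom.ppf(alpha / 2, n, p))
        hi = int(binom.ppf(1 - alpha / 2, n, p))
        return lo, hi

    print(retest_range(40))   # retest scores outside this range differ significantly
    ```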

  9. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets between the two approaches. Model performance was similar between the two approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable...

  10. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    Abstract. The Muskingum model is an effective flood-routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies routed by various models, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracies of flood routing by the various models, and the optimal estimated outflows by the NVPNLMM were closer to the observed outflows than those of the other models.
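
    For orientation, the fixed-parameter nonlinear Muskingum recursion that the NVPNLMM generalises can be routed explicitly as sketched below; K, x and m would come from the optimizer (Excel Solver in the paper), and the values supplied here would be placeholders.

    ```python
    def route_muskingum(inflow, K, x, m, dt=1.0):
        """Nonlinear Muskingum: storage S = K * (x*I + (1-x)*O)**m with dS/dt = I - O."""
        S = K * inflow[0] ** m                     # assume steady state (O = I) at start
        outflow = []
        for I in inflow:
            # invert the storage relation for the current outflow
            O = max(((S / K) ** (1.0 / m) - x * I) / (1.0 - x), 0.0)
            outflow.append(O)
            S += dt * (I - O)                      # explicit Euler storage update
        return outflow
    ```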

  11. Role of environmental variables on radon concentration in soil

    International Nuclear Information System (INIS)

    Climent, H.; Bakalowicz, M.; Monnin, M.

    1998-01-01

    In the framework of a European project, radon concentrations in soil and environmental variables such as the nature of the soil or climatic variables were monitored. The data have been analysed by time-series analysis methods, i.e., correlation and spectral analysis, to point out relations between radon concentrations and some environmental variables. This approach is a compromise between direct observation and modelling. The observation of the raw time series is unable to reveal the relation between radon concentrations and an environmental variable, because the influences of several variables overlap and the medium induces a time delay. The cross-spectrum function between the radon time series and that of an environmental variable describes the nature of the relation and gives the response time in the case of a cause-to-effect relation. It requires only the hypothesis that the environmental variable is the input function and the radon concentration the output function. This analysis is an important preliminary step for modelling. In this way the importance of soil nature has been pointed out. The internal variables of the medium (permeability, porosity) appear to restrain the influence of the environmental variables such as humidity, temperature or atmospheric pressure. (author)

  12. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply the LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly in our computational studies.
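
    The computational point — working with an n × n kernel matrix instead of m-dimensional coefficients — is visible already in plain kernel ridge regression; a linear-kernel sketch is given below. The censoring handling and AFT loss from the paper are omitted, so this is only the dual-dimensionality idea, not the proposed method.

    ```python
    import numpy as np

    def kernel_ridge_fit(X, y, lam=1.0):
        """Dual ridge: solve (K + lam*I) alpha = y with K = X X^T, an n x n system
        even when X has m >> n columns (genes)."""
        K = X @ X.T
        return np.linalg.solve(K + lam * np.eye(len(y)), y)

    def kernel_ridge_predict(alpha, X_train, X_new):
        # predictions need only kernel evaluations against the training samples
        return X_new @ X_train.T @ alpha
    ```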

  13. TEC variability near northern EIA crest and comparison with IRI model

    Science.gov (United States)

    Aggarwal, Malini

    2011-10-01

    Monthly median values of hourly total electron content (TEC) are obtained with GPS at a station near the northern anomaly crest, Rajkot (geog. 22.29°N, 70.74°E; geomag. 14.21°N, 144.9°E), to study the variability of low-latitude ionospheric behavior during a low solar activity period (April 2005 to March 2006). The TEC exhibits characteristic features such as day-to-day variability, the semiannual anomaly and noon bite-out. The observed TEC is compared with the latest International Reference Ionosphere (IRI-2007) model using the topside electron density options NeQuick, IRI01-corr and IRI-2001, with both URSI and CCIR coefficients. Good agreement between observed and predicted TEC is found during the daytime, with underestimation at other times. The TEC predicted by NeQuick and IRI01-corr is closer to the observed TEC during the daytime, whereas during nighttime and morning hours IRI-2001 shows less discrepancy in all seasons with both URSI and CCIR coefficients.

  14. Spectroscopic properties of a two-dimensional time-dependent Cepheid model. I. Description and validation of the model

    Science.gov (United States)

    Vasilyev, V.; Ludwig, H.-G.; Freytag, B.; Lemasle, B.; Marconi, M.

    2017-10-01

    Context. Standard spectroscopic analyses of Cepheid variables are based on hydrostatic one-dimensional model atmospheres, with convection treated using various formulations of mixing-length theory. Aims: This paper aims to carry out an investigation of the validity of the quasi-static approximation in the context of pulsating stars. We check the adequacy of a two-dimensional time-dependent model of a Cepheid-like variable with focus on its spectroscopic properties. Methods: With the radiation-hydrodynamics code CO5BOLD, we construct a two-dimensional time-dependent envelope model of a Cepheid with Teff = 5600 K, log g = 2.0, solar metallicity, and a 2.8-day pulsation period. Subsequently, we perform extensive spectral syntheses of a set of artificial iron lines in local thermodynamic equilibrium. The set of lines allows us to systematically study effects of line strength, ionization stage, and excitation potential. Results: We evaluate the microturbulent velocity, line asymmetry, projection factor, and Doppler shifts. The microturbulent velocity, averaged over all lines, depends on the pulsational phase and varies between 1.5 and 2.7 km s-1. The derived projection factor lies between 1.23 and 1.27, which agrees with observational results. The mean Doppler shift is non-zero and negative, -1 km s-1, after averaging over several full periods and lines. This residual line-of-sight velocity (related to the "K-term") is primarily caused by horizontal inhomogeneities, and consequently we interpret it as the familiar convective blueshift ubiquitously present in non-pulsating late-type stars. Limited statistics prevent firm conclusions on the line asymmetries. Conclusions: Our two-dimensional model provides a reasonably accurate representation of the spectroscopic properties of a short-period Cepheid-like variable star. Some properties are primarily controlled by convective inhomogeneities rather than by the Cepheid-defining pulsations. Extended multi-dimensional modelling...

  15. Inter-annual variability of the atmospheric carbon dioxide concentrations as simulated with global terrestrial biosphere models and an atmospheric transport model

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Daisuke; Saeki, Tazu; Nakazawa, Takakiyo [Tohoku Univ., Sendai (Japan). Center for Atmospheric and Oceanic Studies; Ishizawa, Misa; Maksyutov, Shamil [Inst. for Global Change Research, Yokohama (Japan). Frontier Research System for Global Change; Thornton, Peter E. [National Center for Atmospheric Research, Boulder, CO (United States). Climate and Global Dynamics Div.

    2003-04-01

    Seasonal and inter-annual variations of atmospheric CO2 for the period from 1961 to 1997 have been simulated using a global tracer transport model driven by a new version of the Biome BioGeochemical Cycle model (Biome-BGC). Biome-BGC was forced by daily temperature and precipitation from the NCEP reanalysis dataset, and the calculated monthly-averaged CO2 fluxes were used as input to the global transport model. Results from an inter-comparison with the Carnegie-Ames-Stanford Approach model (CASA) and the Simulation model of Carbon CYCLE in Land Ecosystems (Sim-CYCLE) model are also reported. The phase of the seasonal cycle in the Northern Hemisphere was reproduced generally well by Biome-BGC, although the amplitude was smaller compared to the observations and to the other biosphere models. The CO2 time series simulated by Biome-BGC were compared to the global CO2 concentration anomalies from the observations at Mauna Loa and the South Pole. The modeled concentration anomalies matched the phase of the inter-annual variations in the atmospheric CO2 observations; however, the modeled amplitude was lower than the observed value in several cases. The result suggests that a significant part of the inter-annual variability in the global carbon cycle can be accounted for by the terrestrial biosphere models. Simulations performed with another climate-based model, Sim-CYCLE, produced a larger amplitude of inter-annual variability in atmospheric CO2, making the amplitude closer to the observed range, but with a more visible phase mismatch in a number of time periods. This may indicate the need to increase the Biome-BGC model sensitivity to seasonal and inter-annual changes in temperature and precipitation.

  16. Inter-annual variability of the atmospheric carbon dioxide concentrations as simulated with global terrestrial biosphere models and an atmospheric transport model

    International Nuclear Information System (INIS)

    Fujita, Daisuke; Saeki, Tazu; Nakazawa, Takakiyo; Ishizawa, Misa; Maksyutov, Shamil; Thornton, Peter E.

    2003-01-01

    Seasonal and inter-annual variations of atmospheric CO2 for the period from 1961 to 1997 have been simulated using a global tracer transport model driven by a new version of the Biome BioGeochemical Cycle model (Biome-BGC). Biome-BGC was forced by daily temperature and precipitation from the NCEP reanalysis dataset, and the calculated monthly-averaged CO2 fluxes were used as input to the global transport model. Results from an inter-comparison with the Carnegie-Ames-Stanford Approach model (CASA) and the Simulation model of Carbon CYCLE in Land Ecosystems (Sim-CYCLE) model are also reported. The phase of the seasonal cycle in the Northern Hemisphere was reproduced generally well by Biome-BGC, although the amplitude was smaller compared to the observations and to the other biosphere models. The CO2 time series simulated by Biome-BGC were compared to the global CO2 concentration anomalies from the observations at Mauna Loa and the South Pole. The modeled concentration anomalies matched the phase of the inter-annual variations in the atmospheric CO2 observations; however, the modeled amplitude was lower than the observed value in several cases. The result suggests that a significant part of the inter-annual variability in the global carbon cycle can be accounted for by the terrestrial biosphere models. Simulations performed with another climate-based model, Sim-CYCLE, produced a larger amplitude of inter-annual variability in atmospheric CO2, making the amplitude closer to the observed range, but with a more visible phase mismatch in a number of time periods. This may indicate the need to increase the Biome-BGC model sensitivity to seasonal and inter-annual changes in temperature and precipitation

  17. Dissociating neural variability related to stimulus quality and response times in perceptual decision-making.

    Science.gov (United States)

    Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten

    2018-03-01

    According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. BehavePlus fire modeling system, version 5.0: Variables

    Science.gov (United States)

    Patricia L. Andrews

    2009-01-01

    This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

  19. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
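
    One way to realise a variable-time-step QSTS pass is sketched below as a simplified stand-in for the solver in the paper: run the power-flow solve only at time points where the driving input has moved by more than a tolerance since the last solved point, assuming nearly unchanged behaviour in between.

    ```python
    def solve_points(series, tol):
        """Indices of a year-long input series (e.g. net load per minute) at which
        to run a power-flow solve; intermediate points are assumed to change too
        little to affect voltages or regulator actions (an assumption to validate)."""
        keep = [0]
        for i in range(1, len(series)):
            if abs(series[i] - series[keep[-1]]) >= tol:
                keep.append(i)
        return keep

    # len(solve_points(net_load, tol)) / len(net_load) estimates the speed-up factor.
    ```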

  20. A Time-Regularized, Multiple Gravity-Assist Low-Thrust, Bounded-Impulse Model for Trajectory Optimization

    Science.gov (United States)

    Ellison, Donald H.; Englander, Jacob A.; Conway, Bruce A.

    2017-01-01

    The multiple gravity assist low-thrust (MGALT) trajectory model combines the medium-fidelity Sims-Flanagan bounded-impulse transcription with a patched-conics flyby model and is an important tool for preliminary trajectory design. While this model features fast state propagation via Kepler's equation and provides a pleasingly accurate estimation of the total mass budget for the eventual flight-suitable integrated trajectory, it does suffer from one major drawback, namely its temporal spacing of the control nodes. We introduce a variant of the MGALT transcription that utilizes the generalized anomaly from the universal formulation of Kepler's equation as a decision variable in addition to the trajectory phase propagation time. This results in two improvements over the traditional model. The first is that the maneuver locations are equally spaced in generalized anomaly about the orbit rather than in time. The second is that the Kepler propagator now has the generalized anomaly as its independent variable instead of time and thus becomes an iteration-free propagation method. The new algorithm is outlined, including the impact that this has on the computation of Jacobian entries for numerical optimization, and a motivating application problem is presented that illustrates the improvements that this model has over the traditional MGALT transcription.
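
    The iteration-free propagation follows because elapsed time is an explicit function of the generalized (universal) anomaly. A sketch of that evaluation using Stumpff functions is below; these are the standard universal-variable formulas, not the MGALT implementation itself.

    ```python
    import numpy as np

    def stumpff_C(z):
        if z > 0:  return (1.0 - np.cos(np.sqrt(z))) / z
        if z < 0:  return (np.cosh(np.sqrt(-z)) - 1.0) / (-z)
        return 0.5

    def stumpff_S(z):
        if z > 0:  return (np.sqrt(z) - np.sin(np.sqrt(z))) / np.sqrt(z) ** 3
        if z < 0:  return (np.sinh(np.sqrt(-z)) - np.sqrt(-z)) / np.sqrt(-z) ** 3
        return 1.0 / 6.0

    def propagate_chi(r0, v0, chi, mu):
        """Propagate (r0, v0) by a step in universal anomaly chi. With chi as the
        independent variable, the elapsed time dt is evaluated directly instead of
        being solved for iteratively."""
        r0n = np.linalg.norm(r0)
        vr0 = r0 @ v0 / r0n
        alpha = 2.0 / r0n - v0 @ v0 / mu       # reciprocal of the semi-major axis
        z = alpha * chi ** 2
        C, S = stumpff_C(z), stumpff_S(z)
        dt = (r0n * vr0 / np.sqrt(mu) * chi ** 2 * C
              + (1.0 - alpha * r0n) * chi ** 3 * S + r0n * chi) / np.sqrt(mu)
        f = 1.0 - chi ** 2 / r0n * C           # Lagrange f and g coefficients
        g = dt - chi ** 3 * S / np.sqrt(mu)
        r = f * r0 + g * v0
        rn = np.linalg.norm(r)
        fdot = np.sqrt(mu) / (rn * r0n) * (alpha * chi ** 3 * S - chi)
        gdot = 1.0 - chi ** 2 / rn * C
        return r, fdot * r0 + gdot * v0, dt
    ```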

  1. A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series

    KAUST Repository

    Martinez, Josue G.; Bohn, Kirsten M.; Carroll, Raymond J.; Morris, Jeffrey S.

    2013-01-01

    We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible.

  3. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Sciences Meeting to be held in January 2005. Since the submittal of these abstracts, we have continued refining the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a ramp and a Mach 8.2 3-D calculation of crossing shocks. We have also developed an axisymmetric version of the code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated in the code, and we are in the process of determining the model constants.

  4. Comprehensive Modeling and Analysis of Rotorcraft Variable Speed Propulsion System With Coupled Engine/Transmission/Rotor Dynamics

    Science.gov (United States)

    DeSmidt, Hans A.; Smith, Edward C.; Bill, Robert C.; Wang, Kon-Well

    2013-01-01

    This project develops comprehensive modeling and simulation tools for analysis of variable rotor speed helicopter propulsion system dynamics. The Comprehensive Variable-Speed Rotorcraft Propulsion Modeling (CVSRPM) tool developed in this research is used to investigate coupled rotor/engine/fuel control/gearbox/shaft/clutch/flight control system dynamic interactions for several variable rotor speed mission scenarios. In this investigation, a prototypical two-speed Dual-Clutch Transmission (DCT) is proposed and designed to achieve 50 percent rotor speed variation. The comprehensive modeling tool developed in this study is utilized to analyze the two-speed shift response of both a conventional single-rotor helicopter and a tiltrotor drive system. In the tiltrotor system, both a Parallel Shift Control (PSC) strategy and a Sequential Shift Control (SSC) strategy for constant and variable forward speed mission profiles are analyzed. Under the PSC strategy, selecting the clutch shift-rate results in a design tradeoff between transient engine surge margins and clutch frictional power dissipation. In the case of SSC, clutch power dissipation is drastically reduced in exchange for the necessity of disengaging one engine at a time, which requires a multi-DCT drive system topology. In addition to comprehensive simulations, several sections are dedicated to detailed analysis of driveline subsystem components under variable speed operation. In particular, an aeroelastic simulation of a stiff in-plane rotor using nonlinear quasi-steady blade element theory was conducted to investigate variable speed rotor dynamics. It was found that 2/rev and 4/rev flap and lag vibrations were significant during resonance crossings, with 4/rev lagwise loads being directly transferred into drive-system torque disturbances. To capture the clutch engagement dynamics, a nonlinear stick-slip clutch torque model is developed. Also, a transient gas-turbine engine model based on first principles mean

  5. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    In this paper we analyze the complexity of time limits that arise especially in the regulated processes of public administration. First, we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capturing of time limits only partially. This causes trouble for analysts and adds unnecessary complexity to the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.

  6. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
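    For intuition, the joint-correction idea can be sketched in a Gaussian rank (copula) framework: quantile-map each marginal as IBC would, then adjust the rank dependence toward the observed P-T correlation. This is an illustration of the concept only, not the JBC algorithm evaluated in the paper; all function names are ours.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def quantile_map(x_mod, x_obs):
        """Empirical quantile mapping of model values onto the observed marginal."""
        q = (rankdata(x_mod) - 0.5) / len(x_mod)
        return np.quantile(x_obs, q)

    def joint_bias_correct(p_mod, t_mod, p_obs, t_obs):
        # 1) Correct the two marginals independently (the "IBC" part).
        p_c = quantile_map(p_mod, p_obs)
        t_c = quantile_map(t_mod, t_obs)
        # 2) Move to Gaussian scores to manipulate the dependence structure.
        z = lambda x: norm.ppf((rankdata(x) - 0.5) / len(x))
        zp, zt = z(p_c), z(t_c)
        r_target = np.corrcoef(z(p_obs), z(t_obs))[0, 1]  # observed P-T dependence
        # 3) Rebuild T scores as "correlated part + residual" with r_target.
        resid = zt - np.corrcoef(zp, zt)[0, 1] * zp
        resid /= resid.std()
        zt_new = r_target * zp + np.sqrt(1 - r_target**2) * resid
        # 4) Reorder the corrected T values to follow the adjusted ranks.
        t_sorted = np.sort(t_c)
        return p_c, t_sorted[rankdata(zt_new).astype(int) - 1]
    ```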

  7. Verifying Real-Time Systems using Explicit-time Description Methods

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. On the other hand, Explicit-Time Description Methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.

  8. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work.

  9. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    Science.gov (United States)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that larger (smaller) magnitude of effective

  10. Interacting ghost dark energy models with variable G and Λ

    Science.gov (United States)

    Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

    2013-12-01

    In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different treatments and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between the fluids with energy densities ρ1 and ρ2 is assumed to take the form Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters, such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.
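    For reference, one common form of the field equations behind such models, for a flat FRW Universe with variable G and Λ and an interacting two-component fluid, is (our notation; the paper's specific Λ parameterizations may differ):

    ```latex
    H^2 = \frac{8\pi G(t)}{3}\,(\rho_1+\rho_2) + \frac{\Lambda(t)}{3}, \qquad
    \dot{\rho}_1 + 3H(\rho_1+p_1) = -Q, \qquad
    \dot{\rho}_2 + 3H(\rho_2+p_2) = Q,
    ```

    with Q = 3Hb(ρ1+ρ2), and with the Bianchi identity imposing the constraint 8πĠ(ρ1+ρ2) + Λ̇ = 0 so that the total energy-momentum remains conserved.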

  11. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  12. Predictor variables for half marathon race time in recreational female runners

    OpenAIRE

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated at distances from 100 m to the marathon, but not for the half marathon. OBJECTIVE: To investigate whether anthropometric characteristics or training practices were related to race time in 42 recreational female half marathoners, to determine the predictor variables of half marathon race time, and to inform future novice female half marathoners. METHODS: Observational field study at the ‘Half ...

  13. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    Science.gov (United States)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean than in the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  14. Modeling the Variable Heliopause Location

    Science.gov (United States)

    Hensley, Kerry

    2018-03-01

    In 2012, Voyager 1 zipped across the heliopause. Five and a half years later, Voyager 2 still hasn't followed its twin into interstellar space. Can models of the heliopause location help determine why? [Figure: artist's conception of the heliosphere with the important structures and boundaries labeled. NASA/Goddard/Walt Feimer] How far to the heliopause? As our solar system travels through the galaxy, the solar outflow pushes against the surrounding interstellar medium, forming a bubble called the heliosphere. The edge of this bubble, the heliopause, is the outermost boundary of our solar system, where the solar wind and the interstellar medium meet. Since the solar outflow is highly variable, the heliopause is constantly moving, with the motion driven by changes in the Sun. NASA's twin Voyager spacecraft were poised to cross the heliopause after completing their tour of the outer planets in the 1980s. In 2012, Voyager 1 registered a sharp increase in the density of interstellar particles, indicating that the spacecraft had passed out of the heliosphere and into the interstellar medium. The slower-moving Voyager 2 was set to pierce the heliopause along a different trajectory, but so far no measurements have shown that the spacecraft has bid farewell to our solar system. In a recent study, a team of scientists led by Haruichi Washimi (Kyushu University, Japan and CSPAR, University of Alabama-Huntsville) argues that models of the heliosphere can help explain this behavior. Because the heliopause location is controlled by factors that vary on many spatial and temporal scales, Washimi and collaborators turn to three-dimensional, time-dependent magnetohydrodynamics simulations of the heliosphere. In particular, they investigate how the position of the heliopause along the trajectories of Voyager 1 and Voyager 2 changes over time. [Figure: modeled location of the heliopause along the paths of Voyagers 1 (blue) and 2 (orange); the red star indicates the location at which Voyager

  15. Exploiting maximum energy from variable speed wind power generation systems by using an adaptive Takagi-Sugeno-Kang fuzzy model

    International Nuclear Information System (INIS)

    Galdi, V.; Piccolo, A.; Siano, P.

    2009-01-01

    Nowadays, incentives and financing options for developing renewable energy facilities, together with recent developments in variable speed wind technology, make wind energy a competitive source compared with conventional generation. In order to improve the effectiveness of variable speed wind systems, adaptive control systems able to cope with time variances of the system under control are necessary. On this basis, a data-driven design methodology for TSK fuzzy models is presented in this paper. The methodology, on the basis of given input-output numerical data, generates the 'best' TSK fuzzy model able to estimate with high accuracy the maximum extractable power from a variable speed wind turbine. The design methodology is based on fuzzy clustering methods for partitioning the input-output space, combined with genetic algorithms (GA) and recursive least-squares (LS) optimization methods for model parameter adaptation.
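    The general recipe (fuzzy partitioning of the input-output space plus least-squares consequents) can be sketched as follows; this toy version uses randomly seeded Gaussian memberships in place of the paper's fuzzy clustering and GA/recursive-LS machinery, and all parameter values are illustrative.

    ```python
    import numpy as np

    def fit_tsk(X, y, n_rules=3, seed=0):
        """Crude TSK model: Gaussian rule memberships + linear consequents."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), n_rules, replace=False)]
        sigma = X.std(axis=0) + 1e-9

        def weights(Xq):
            # normalized rule firing strengths (product-of-Gaussians memberships)
            d = ((Xq[:, None, :] - centers) / sigma) ** 2
            w = np.exp(-0.5 * d.sum(axis=2)) + 1e-12
            return w / w.sum(axis=1, keepdims=True)

        W = weights(X)
        Xa = np.hstack([X, np.ones((len(X), 1))])       # affine consequents
        thetas = []
        for k in range(n_rules):                        # weighted least squares
            A = Xa * W[:, k:k + 1]
            thetas.append(np.linalg.solve(
                A.T @ Xa + 1e-9 * np.eye(Xa.shape[1]), A.T @ y))

        def predict(Xq):
            Wq = weights(Xq)
            Xq_a = np.hstack([Xq, np.ones((len(Xq), 1))])
            return sum(Wq[:, k] * (Xq_a @ thetas[k]) for k in range(n_rules))

        return predict

    # Toy usage: approximate a turbine-like cubic power curve from noisy data
    rng = np.random.default_rng(1)
    v = rng.uniform(3, 12, (300, 1))                    # wind speed (m/s)
    p = 0.02 * v[:, 0] ** 3 + rng.normal(0, 0.2, 300)   # noisy 'power' samples
    model = fit_tsk(v, p, n_rules=4)
    print(model(np.array([[8.0]])))                     # predicted power at 8 m/s
    ```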

  16. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with
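    Formally (in our notation), the model specifies an observed duration as the first passage time

    ```latex
    T(\theta) \;=\; \inf\{\, t \ge 0 : X(t) > \theta \,\},
    ```

    where X is a Lévy process and the threshold θ varies across the population according to a mixing distribution, so that observed durations follow a mixture of first-passage-time distributions.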

  17. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    Science.gov (United States)

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the variability in the C. botulinum initial load and decimal thermal time was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms, reducing the risk to 10⁻⁹ spores/container with 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min), which carried an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over
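    The core Monte Carlo idea can be sketched in a few lines: propagate variability in the decimal reduction time (D-value) into the distribution of log reductions achieved by a fixed process time. The distribution parameters below are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    t_process = 15.0 / 60.0                 # 15-s HTST hold, in minutes

    # Hypothetical D-value variability at process temperature (minutes):
    D = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n)

    log_reductions = t_process / D          # log10 reductions = t / D
    p_fail = np.mean(log_reductions < 5.0)  # chance of missing a 5-log target
    print(f"P(fewer than 5 log reductions) = {p_fail:.1%}")
    ```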

  18. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
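    As a simple stand-in for the PMI procedure (the actual algorithm also accounts for redundancy among already-selected inputs), candidate inputs can be ranked by estimated mutual information with the target; the candidate names below are hypothetical Coriolis-meter outputs.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def rank_inputs(X, y, names):
        """Rank candidate inputs by estimated mutual information with target y."""
        mi = mutual_info_regression(X, y, random_state=0)
        order = np.argsort(mi)[::-1]
        return [(names[i], mi[i]) for i in order]

    # X columns: e.g. observed density, damping, apparent flowrate, temperature
    # y: reference liquid mass flowrate from a test rig
    ```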

  19. On the physical processes which lie at the bases of time variability of GRBs

    International Nuclear Information System (INIS)

    Ruffini, R.; Bianco, C. L.; Fraschetti, F.; Xue, S-S.

    2001-01-01

    The relative-space-time-transformation (RSTT) paradigm and the interpretation of the burst-structure (IBS) paradigm are applied to probe the origin of the time variability of GRBs. Again GRB 991216 is used as a prototypical case, thanks to the precise data from the CGRO, RXTE and Chandra satellites. It is found that, with the exception of the relatively inconspicuous but scientifically very important signal originating from the initial proper gamma ray burst (P-GRB), all the other spikes and time variabilities can be explained by the interaction of the accelerated-baryonic-matter pulse with inhomogeneities in the interstellar matter. This can be demonstrated by using the RSTT paradigm as well as the IBS paradigm to trace a typical spike observed in arrival time back to the corresponding one in the laboratory time. Using these paradigms, the identification of the physical nature of the time variability of GRBs can be made most convincingly. The dependence of (a) the intensities of the afterglow, (b) the spike amplitudes and (c) the actual time structure on the Lorentz gamma factor of the accelerated-baryonic-matter pulse is made explicit. In principle it is possible to read off from the spike structure the detailed density contrast of the interstellar medium in the host galaxy, even at very high redshift.

  20. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
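    The core of Markov chain clustering can be sketched compactly. For brevity, the sketch below uses EM point estimation for a finite mixture of first-order Markov chains rather than the paper's Bayesian MCMC scheme, and it omits the multinomial logit model for the group indicators.

    ```python
    import numpy as np

    def em_markov_mixture(seqs, n_states, n_groups, n_iter=100, seed=0):
        """seqs: list of integer-coded state sequences (values 0..n_states-1)."""
        rng = np.random.default_rng(seed)
        A = rng.dirichlet(np.ones(n_states), size=(n_groups, n_states))
        pi = np.full(n_groups, 1.0 / n_groups)          # group weights
        # Precompute transition-count matrices, one per sequence
        counts = np.zeros((len(seqs), n_states, n_states))
        for i, s in enumerate(seqs):
            for a, b in zip(s[:-1], s[1:]):
                counts[i, a, b] += 1
        for _ in range(n_iter):
            # E-step: log-likelihood of each sequence under each group's chain
            loglik = np.einsum('ijk,gjk->ig', counts, np.log(A)) + np.log(pi)
            loglik -= loglik.max(axis=1, keepdims=True)
            resp = np.exp(loglik)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: responsibility-weighted transition counts
            wc = np.einsum('ig,ijk->gjk', resp, counts) + 1e-6
            A = wc / wc.sum(axis=2, keepdims=True)
            pi = resp.mean(axis=0)
        return A, pi, resp
    ```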

  1. Antipersistent dynamics in short time scale variability of self-potential signals

    Directory of Open Access Journals (Sweden)

    M. Ragosta

    2000-06-01

    Time scale properties of self-potential signals are investigated through the analysis of the second-order structure function (variogram), a powerful tool to investigate the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal fluctuations are characterised by two time scale ranges in which self-potential variability appears to follow slightly different dynamical behaviours. Results point to the presence of fractal, non-stationary features expressing a long-term correlation, with scaling coefficients which are the signature of stabilising mechanisms. In the scale ranges in which the series show scale-invariant behaviour, self-potentials evolve like fractional Brownian motions with anticorrelated increments, typical of processes regulated by negative feedback mechanisms (antipersistence). On scales below about 6 h, the strength of such antipersistence appears to be slightly greater than that observed on larger time scales, where the fluctuations are less efficiently stabilised.
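    The analysis tool itself is easy to reproduce. A minimal sketch of the second-order structure function and the scaling exponent it implies (for fractional Brownian motion S2(tau) ~ tau^(2H), with H < 0.5 indicating antipersistence):

    ```python
    import numpy as np

    def structure_function(x, lags):
        """S2(tau) = mean squared increment at each lag tau (in samples)."""
        return np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(10_000))   # ordinary Brownian motion, H = 0.5
    lags = np.unique(np.logspace(0, 3, 20).astype(int))
    s2 = structure_function(x, lags)
    H = 0.5 * np.polyfit(np.log(lags), np.log(s2), 1)[0]   # slope / 2
    print(f"estimated Hurst exponent H = {H:.2f}")         # ~0.5 for this signal
    ```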

  2. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool for building interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
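    The stability selection loop itself is short. In the sketch below, a plain lasso regression stands in for the lasso-penalized Cox partial likelihood used in the paper, and the subsampling fraction, penalty and threshold are illustrative:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def stability_selection(X, y, alpha=0.05, n_subsamples=100,
                            threshold=0.6, seed=0):
        """Return the stable variable set and per-variable selection frequencies."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        freq = np.zeros(p)
        for _ in range(n_subsamples):
            idx = rng.choice(n, n // 2, replace=False)   # random half-sample
            model = Lasso(alpha=alpha).fit(X[idx], y[idx])
            freq += (model.coef_ != 0)
        freq /= n_subsamples
        return np.flatnonzero(freq >= threshold), freq
    ```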

  3. Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales

    Science.gov (United States)

    Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

    2009-01-01

    The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…

  4. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
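    The classification setup can be sketched as follows; feature extraction from the light curves is omitted, and the arrays below are placeholders, not MACHO data:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # X: (n_light_curves, n_features) time-series features such as
    # period, amplitude, color, autocorrelation value
    # y: 1 for known QSOs, 0 for variable/non-variable stars and microlensing
    X, y = np.random.rand(200, 4), np.random.randint(0, 2, 200)  # placeholders

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=5)   # cross-validation, as in the paper
    print(f"CV accuracy: {scores.mean():.2f}")
    ```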

  5. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

    2000-01-01

    Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and also for behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal multiphase flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent permeability. A monolithic numerical model with all these features would require too large a computational cost to be an effective simulation tool, one that is used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain and captures the natural variability of the site consistent with what we know from site characterization, as well as waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository.

  6. Using nonparametrics to specify a model to measure the value of travel time

    DEFF Research Database (Denmark)

    Fosgerau, Mogens

    2007-01-01

    Using a range of nonparametric methods, the paper examines the specification of a model to evaluate the willingness-to-pay (WTP) for travel time changes from binomial choice data from a simple time-cost trading experiment. The analysis favours a model with random WTP as the only source of randomness over a model with fixed WTP which is linear in time and cost and has an additive random error term. Results further indicate that the distribution of log WTP can be described as the sum of a linear index fixing the location of the log WTP distribution and an independent random variable representing unobserved heterogeneity. This formulation is useful for parametric modelling. The index indicates that the WTP varies systematically with income and other individual characteristics. The WTP varies also with the time difference presented in the experiment, which is in contradiction with standard utility theory.

  7. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J [Instituto de Robotica. Universidad de Valencia (Spain)

    2007-09-06

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.

  8. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    International Nuclear Information System (INIS)

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J.

    2007-01-01

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.
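    The coupling idea can be sketched with an explicit finite-difference scheme: a 1D wave equation for the cable, with one end clamped (as at a pulley) and the other end driven by a damped spring representing a suspended load. The paper's model also includes pulleys and a variable-coefficient wave equation; all parameters below are illustrative.

    ```python
    import numpy as np

    n, L, c = 50, 10.0, 40.0           # nodes, cable length (m), wave speed (m/s)
    dx = L / (n - 1)
    dt = 0.5 * dx / c                  # within the CFL stability limit
    u = np.zeros(n)                    # transverse displacement, current step
    u[n // 2] = 0.1                    # initial pluck at mid-cable
    u_prev = u.copy()                  # previous step (zero initial velocity)
    m, k, d = 5.0, 200.0, 2.0          # end-load mass, spring stiffness, damping
    v_end = 0.0                        # velocity of the loaded end node

    for _ in range(5000):
        u_new = np.empty_like(u)
        # interior nodes: leapfrog update of u_tt = c^2 u_xx
        u_new[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                       + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_new[0] = 0.0                 # clamped end (e.g. at a pulley)
        # loaded end: discrete cable tension plus damped-spring restoring force
        acc = (c / dx) ** 2 * (u[-2] - u[-1]) - (k * u[-1] + d * v_end) / m
        v_end += acc * dt
        u_new[-1] = u[-1] + v_end * dt
        u_prev, u = u, u_new
    ```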

  9. Synthesis of Biochemical Applications on Digital Microfluidic Biochips with Operation Execution Time Variability

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2015-01-01

    Several approaches have been proposed for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. Researchers have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets, resulting in unexploited slack in the schedule. In this paper, we first propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus exploiting the slack to obtain shorter application completion times. We also propose a quasi-static synthesis strategy.

  10. Multi-Objective Flexible Flow Shop Scheduling Problem Considering Variable Processing Time due to Renewable Energy

    Directory of Open Access Journals (Sweden)

    Xiuli Wu

    2018-03-01

    Renewable energy is an alternative to non-renewable energy for reducing the carbon footprint of manufacturing systems. Determining how to construct an energy-efficient scheduling solution when renewable and non-renewable energy jointly drive production is therefore of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE) is studied. First, the optimization model of the MFFSP-VPTRE is formulated, considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II) is proposed to solve the MFFSP-VPTRE. An operation- and machine-based encoding method is employed. A low-carbon scheduling algorithm is presented. Besides crossover and mutation, a variable local search is used to improve the offspring’s Pareto set. The offspring and the parents are combined, and those that dominate more are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization and that the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.

  11. Solar Cycle Variability Induced by Tilt Angle Scatter in a Babcock-Leighton Solar Dynamo Model

    Science.gov (United States)

    Karak, Bidya Binay; Miesch, Mark

    2017-09-01

    We present results from a three-dimensional Babcock-Leighton (BL) dynamo model that is sustained by the emergence and dispersal of bipolar magnetic regions (BMRs). On average, each BMR has a systematic tilt given by Joy’s law. Randomness and nonlinearity in the BMR emergence of our model produce variable magnetic cycles. However, when we allow for a random scatter in the tilt angle to mimic the observed departures from Joy’s law, we find more variability in the magnetic cycles. We find that the observed standard deviation in Joy’s law of σδ = 15° produces a variability comparable to the observed solar cycle variability of ~32%, as quantified by the sunspot number maxima between 1755 and 2008. We also find that tilt angle scatter can promote grand minima and grand maxima. The time spent in grand minima for σδ = 15° is somewhat less than that inferred for the Sun from cosmogenic isotopes (about 9% compared to 17%). However, when we double the tilt scatter to σδ = 30°, the simulation statistics are comparable to the Sun (~18% of the time in grand minima and ~10% in grand maxima). Though the BL mechanism is the only source of poloidal field, we find that our simulations always maintain magnetic cycles even at large fluctuations in the tilt angle. We also demonstrate that tilt quenching is a viable and efficient mechanism for dynamo saturation; a suppression of the tilt by only 1°-2° is sufficient to limit the dynamo growth. Thus, any potential observational signatures of tilt quenching in the Sun may be subtle.

  12. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular, we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant and irrelevant variables, and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results.

  13. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency; Sims, Ryan [Environmental Protection Agency; Stenhouse, Jeb [Environmental Protection Agency; Donohoo-Vallett, Paul [U.S. Department of Energy

    2017-11-03

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops on VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still a need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

  14. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    Science.gov (United States)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  15. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant value Cφ = 12. A comparison of the simulation results with available measurements shows that only the EMST model accurately calculates the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various Cφ values ranging from 2 to 50, combined with the three scalar mixing models, have been performed. The observed deficiencies of the IEM and CD models persisted for all Cφ values considered. The value Cφ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice of Cφ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale, based on a linear blending between a model for flamelet combustion and a model for distributed combustion, is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)
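    For reference, the IEM model mentioned above relaxes each notional-particle scalar toward the local mean at a rate set by the scalar time scale; in a common form (our notation),

    ```latex
    \frac{d\phi^{*}}{dt} \;=\; -\,\frac{C_{\phi}}{2}\,\frac{1}{\tau_{\phi}}\left(\phi^{*}-\langle\phi\rangle\right),
    ```

    where τφ is the scalar mixing time scale obtained from the turbulence fields and Cφ is the constant whose value is examined in the abstract.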

  16. Time and space variability of spectral estimates of atmospheric pressure

    Science.gov (United States)

    Canavero, Flavio G.; Einaudi, Franco

    1987-01-01

    The temporal and spatial behaviors of atmospheric pressure spectra over northern Italy and the Alpine massif were analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps, the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the Alpex Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and the energy content of the spectra depended on the time segment considered. In addition, important differences existed between spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods shorter than 40 min.

  17. Variability of Cost and Time Delivery of Educational Buildings in Nigeria

    Directory of Open Access Journals (Sweden)

    Aghimien, Douglas Omoregie

    2017-09-01

    Cost and time overruns in construction projects have become a recurring problem in construction industries around the world, especially in developing countries. This situation is unhealthy for public educational buildings, which are executed with limited government funds and are in most cases time sensitive, as they need to cater for the influx of students into the institutions. This study therefore assessed the variability of cost and time delivery of educational buildings in Nigeria, using a study of selected educational buildings within the country. A pro forma was used to gather cost and time data on the selected building projects, while a structured questionnaire was used to harness information on possible measures for reducing the variability from the construction participants involved in delivering these projects. Paired-sample t-test, percentage, relative importance index, and Kruskal-Wallis test were adopted for the data analyses. The study reveals that there is a significant difference between the initial and final cost of delivering educational buildings, with an average deviation of 4.87% (sig. p-value of 0.000) experienced across all assessed projects. For time delivery, there is also a significant difference between the initial estimated time and the final construction time, with a striking 130% average deviation (sig. p-value of 0.000). To remedy these problems, the study revealed that prompt payment for executed works, predicting market price fluctuation and incorporating it into the initial estimate, and the owner’s involvement at the planning and design phase are some of the possible measures to be adopted.

  18. Variable School Start Times and Middle School Student's Sleep Health and Academic Performance.

    Science.gov (United States)

    Lewin, Daniel S; Wang, Guanghai; Chen, Yao I; Skora, Elizabeth; Hoehn, Jessica; Baylor, Allison; Wang, Jichuan

    2017-08-01

    Improving sleep health among adolescents is a national health priority, and implementing healthy school start times (SSTs) is an important strategy to achieve these goals. This study leveraged the differences in middle school SST in a large district to evaluate associations between SST, sleep health, and academic performance. This cross-sectional study draws data from a county-wide surveillance survey. Participants were three cohorts of eighth graders (n = 26,440). The school district is unique because SST ranged from 7:20 a.m. to 8:10 a.m. Path analysis and probit regression were used to analyze associations between SST and self-report measures of weekday sleep duration, grades, and homework, controlling for demographic variables (sex, race, and socioeconomic status). The independent contributions of SST and sleep duration to academic performance were also analyzed. Earlier SST was associated with decreased sleep duration (χ² = 173, p < .001) and with poorer academic performance and academic effort. Path analysis models demonstrated the independent contributions of sleep duration and SST, and variable effects for demographic variables. This is the first study to evaluate the independent contributions of SST and sleep to academic performance in a large sample of middle school students. Deficient sleep was prevalent, and the earliest SST was associated with decrements in sleep and academics. These findings support the prioritization of policy initiatives to implement healthy SST for younger adolescents and highlight the importance of sleep health education and disparities among race and gender groups. Copyright © 2017 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  19. A hydrochemical modelling framework for combined assessment of spatial and temporal variability in stream chemistry: application to Plynlimon, Wales

    Directory of Open Access Journals (Sweden)

    H.J. Foster

    2001-01-01

    Recent concern about the risk to biota from acidification in upland areas, due to air pollution and land-use change (such as the planting of coniferous forests), has generated a need to model catchment hydrochemistry to assess environmental risk and define protection strategies. Previous approaches have tended to concentrate on quantifying either spatial variability at a regional scale or temporal variability at a given location. However, to protect biota from ‘acid episodes’, an assessment of both temporal and spatial variability of stream chemistry is required at a catchment scale. In addition, quantification of temporal variability needs to represent both episodic event response and long-term variability caused by deposition and/or land-use change. Both spatial and temporal variability in streamwater chemistry are considered in a new modelling methodology based on application to the Plynlimon catchments, central Wales. A two-component End-Member Mixing Analysis (EMMA) is used, whereby low- and high-flow chemistry are taken to represent ‘groundwater’ and ‘soil water’ end-members. The conventional EMMA method is extended to incorporate spatial variability in the two end-members across the catchments by quantifying the Acid Neutralisation Capacity (ANC) of each in terms of a statistical distribution. These are then input as stochastic variables to a two-component mixing model, thereby accounting for variability of ANC both spatially and temporally. The model is coupled to a long-term acidification model (MAGIC) to predict the evolution of the end-members and, hence, the response to future scenarios. The results can be plotted as a function of time and space, which enables better assessment of the likely effects of pollution deposition or land-use changes in the future on the stream chemistry than current methods which use catchment average values. The model is also a useful basis for further research into linkage between hydrochemistry
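    The stochastic two-component mixing step can be sketched directly: sample the ANC of the ‘groundwater’ and ‘soil water’ end-members from their fitted distributions and mix them by a flow-dependent fraction. The distribution parameters below are invented for illustration, not the Plynlimon values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    anc_gw = rng.normal(150.0, 30.0, n)   # groundwater end-member ANC (ueq/L)
    anc_sw = rng.normal(-20.0, 15.0, n)   # soil-water end-member ANC (ueq/L)
    f_sw = rng.beta(2, 5, n)              # soil-water fraction (high in events)

    anc_stream = f_sw * anc_sw + (1 - f_sw) * anc_gw   # two-component mixture
    p_acid = np.mean(anc_stream < 0)      # chance of an 'acid episode' (ANC < 0)
    print(f"P(stream ANC < 0) = {p_acid:.1%}")
    ```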

  20. Modelling the effects of spatial variability on radionuclide migration

    International Nuclear Information System (INIS)

    1998-01-01

    The NEA workshop reflected the present status of national waste management programmes, specifically regarding spatial variability in the performance assessment of geologic disposal sites for deep repository systems. The four sessions were: Spatial Variability: Its Definition and Significance to Performance Assessment and Site Characterisation; Experience with the Modelling of Radionuclide Migration in the Presence of Spatial Variability in Various Geological Environments; New Areas for Investigation: Two Personal Views; and What is Wanted and What is Feasible: Views and Future Plans in Selected Waste Management Organisations. The 26 papers presented in the four oral sessions and the poster session have been abstracted and indexed individually for the INIS database. (R.P.)

  1. Multiwaveband Variability of Blazars from Turbulent Plasma Passing through a Standing Shock: The Mother of Multi-zone Models

    Science.gov (United States)

    Marscher, Alan P.

    2011-09-01

    Multi-wavelength light curves of bright gamma-ray blazars (e.g., 3C 454.3) are compared with the model proposed by Marscher and Jorstad. In this scenario, much of the optical and high-energy radiation in a blazar is emitted near the 43 GHz core of the jet as seen in VLBA images, parsecs from the central engine. The main physical feature is a turbulent ambient jet plasma that passes through a standing recollimation shock in the jet. The model allows for short time-scales of optical and gamma-ray variability by restricting the highest-energy electrons radiating at these frequencies to a small fraction of the turbulent cells, perhaps those with a particular orientation of the magnetic field relative to the shock front. Because of this, the volume filling factor at high frequencies is relatively low, while that of the electrons radiating below about 10 THz is near unity. Such a model is consistent with (1) the red-noise power spectra of flux variations, (2) shorter time-scales of variability at higher frequencies, (3) the frequency dependence of polarization and its variability, and (4) breaks in the synchrotron spectrum by more than the radiative loss value of 0.5. Simulated light curves are generated by a numerical code that (as of May 2011) includes synchrotron radiation as well as inverse Compton scattering of seed photons from both a dust torus and a Mach disk at the jet axis. The latter source of seed photons produces more pronounced variability in gamma-ray than in optical light curves, as is often observed. More features are expected to be added to the code by the time of the presentation. This research is supported in part by NASA through Fermi grants NNX08AV65G and NNX10AO59G, and by NSF grant AST-0907893.
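
    As a hedged illustration of the red-noise flux variations mentioned in point (1), the sketch below synthesizes a light curve with a power-law power spectrum by assigning random Fourier phases (a simplified random-phase variant of the Timmer and Koenig 1995 method; not the author's code). The length, slope, and seed are arbitrary choices.

    import numpy as np

    def red_noise_lc(n=4096, slope=2.0, seed=0):
        """Light curve with power spectrum S(f) ~ f**(-slope), random phases."""
        rng = np.random.default_rng(seed)
        freqs = np.fft.rfftfreq(n, d=1.0)[1:]           # drop f = 0
        amps = freqs ** (-slope / 2.0)                  # |F(f)| ~ sqrt(S(f))
        phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
        spectrum = np.concatenate(([0.0], amps * np.exp(1j * phases)))
        lc = np.fft.irfft(spectrum, n=n)
        return (lc - lc.mean()) / lc.std()              # normalized flux

    flux = red_noise_lc()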

  2. AeroPropulsoServoElasticity: Dynamic Modeling of the Variable Cycle Propulsion System

    Science.gov (United States)

    Kopasakis, George

    2012-01-01

    This presentation was made at the 2012 Fundamental Aeronautics Program Technical Conference and covers research work on the dynamic modeling of the variable cycle propulsion system done under the Supersonics Project, in the area of AeroPropulsoServoElasticity. The presentation covers the objective of the propulsion system dynamic modeling work, followed by the work done so far to model the variable cycle engine, the modeling of the inlet and the nozzle, the modeling of the effects of flow distortion, and finally some concluding remarks and future plans.

  3. Ocean carbon and heat variability in an Earth System Model

    Science.gov (United States)

    Thomas, J. L.; Waugh, D.; Gnanadesikan, A.

    2016-12-01

    Ocean carbon and heat content are very important for regulating global climate. Furthermore, due to a lack of observations and a dependence on parameterizations, there has been little consensus in the modeling community on the magnitude of realistic ocean carbon and heat content variability, particularly in the Southern Ocean. We assess the differences between global oceanic heat and carbon content variability in GFDL ESM2Mc using a 500-year, pre-industrial control simulation. The global carbon and heat content are directly out of phase with each other; however, in the Southern Ocean the heat and carbon content are in phase. The global multi-decadal heat variability is primarily explained by variability in the tropics and mid-latitudes, while the variability in global carbon content is primarily explained by Southern Ocean variability. In order to test the robustness of this relationship, we use three additional pre-industrial control simulations with different mesoscale mixing parameterizations: the along-isopycnal diffusion coefficient (Aredi) is set to constant values of 400, 800 (control) and 2400 m² s⁻¹. These values for Aredi are within the range of parameter settings commonly used by modeling groups. Finally, one pre-industrial control simulation is conducted where the minimum in the Gent-McWilliams parameterization closure scheme (AGM) is increased to 600 m² s⁻¹. We find that the different simulations have very different multi-decadal variability, especially in the Weddell Sea, where the characteristics of deep convection are drastically changed. While the temporal frequency and amplitude of the global heat and carbon content changes vary significantly, the overall spatial pattern of variability remains unchanged between the simulations.
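
    A toy sketch of the in-phase/out-of-phase diagnostic described above: correlate detrended heat and carbon content anomalies from a control run. The synthetic input series below are invented stand-ins for annual-mean global heat and carbon content.

    import numpy as np
    from scipy.signal import detrend

    def phase_relation(heat, carbon):
        """Correlation of detrended anomalies: r < 0 suggests out-of-phase variability."""
        return np.corrcoef(detrend(heat), detrend(carbon))[0, 1]

    rng = np.random.default_rng(0)
    t = np.arange(500)  # 500 model years
    heat = np.sin(2 * np.pi * t / 60) + 0.3 * rng.normal(size=t.size)
    carbon = -np.sin(2 * np.pi * t / 60) + 0.3 * rng.normal(size=t.size)
    print(phase_relation(heat, carbon))  # strongly negative for this toy pair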

  4. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  5. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  6. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2013-01-14

    The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied to the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study the utility and effectiveness of six individual and nine combined predictor-variable properties, when used in the FCAM method, are investigated for three data sets. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full spectrum models. After mean-centring of the data, REG and SIG provide low numbers of informative variables, with a meaning relevant to the response, lower than the other individual properties, while the predictive abilities are
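
    The following sketch gives a simplified, hedged rendering of a backward elimination on predictor variables ranked by the REG property (the absolute PLS1 regression coefficient). Unlike the full FCAM method, the PLS complexity A is held fixed here, and the stopping rule is a simple target variable count.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def fcam_like(X, y, A=3, n_keep=10):
        """Drop the variable with the smallest |PLS1 coefficient| until n_keep remain."""
        keep = np.arange(X.shape[1])
        while keep.size > n_keep:
            pls = PLSRegression(n_components=A).fit(X[:, keep], y)
            reg = np.abs(np.ravel(pls.coef_))        # REG property of each kept variable
            keep = np.delete(keep, np.argmin(reg))   # eliminate the weakest variable
        score = cross_val_score(PLSRegression(n_components=A),
                                X[:, keep], y, cv=5, scoring="r2").mean()
        return keep, score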

  7. Evaluation of Online Log Variables That Estimate Learners' Time Management in a Korean Online Learning Context

    Science.gov (United States)

    Jo, Il-Hyun; Park, Yeonjeong; Yoon, Meehyun; Sung, Hanall

    2016-01-01

    The purpose of this study was to identify the relationship between the psychological variables and online behavioral patterns of students, collected through a learning management system (LMS). As the psychological variable, time and study environment management (TSEM), one of the sub-constructs of MSLQ, was chosen to verify a set of time-related…

  8. Trend Estimation and Regression Analysis in Climatological Time Series: An Application of Structural Time Series Models and the Kalman Filter.

    Science.gov (United States)

    Visser, H.; Molenaar, J.

    1995-05-01

    The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear; however, the regression coefficients corresponding with the explanatory variables may be allowed to be time dependent in order to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases. The prediction performance of
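
    A hedged sketch of such a trend-plus-regression structural time series model, estimated via the Kalman filter with statsmodels' UnobservedComponents. The endogenous series and explanatory variables below are simulated stand-ins for the temperature record and the SOI, VDI, SSN, and GHG regressors; with real data they would be replaced by the observed series.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 140
    exog = rng.normal(size=(n, 4))          # stand-ins for SOI, VDI, SSN, GHG
    temp = 0.3 * exog[:, 3] + np.cumsum(rng.normal(0.0, 0.05, n))  # trend + signal

    # Stochastic (local linear) trend plus linear regression on the exog variables.
    model = sm.tsa.UnobservedComponents(temp, level="local linear trend", exog=exog)
    res = model.fit(disp=False)
    print(res.summary())                    # trend variances and regression coefficients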

  9. A Modified Intervention Model for Gross Domestic Product Variable

    African Journals Online (AJOL)

    observations on a variable that have been measured at … assumption that successive values in the data file … these interventions, one may try to evaluate the effect of … generalized series by comparing the distinct periods … the process of checking for adequacy of the model based … As a result, the model's forecast will …

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
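
    An illustrative sketch (not the authors' code) of the backward elimination examined above: repeatedly drop the least important predictors from a random forest and track the out-of-bag accuracy against the number of variables retained. The fraction dropped per step and the stopping threshold are arbitrary choices; note the paper's warning that OOB estimates inside a selection loop can be upwardly biased.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rf_backward_elimination(X, y, min_vars=5, drop_frac=0.2):
        keep = np.arange(X.shape[1])
        history = []
        while keep.size >= min_vars:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=0).fit(X[:, keep], y)
            history.append((keep.copy(), rf.oob_score_))
            n_drop = max(1, int(drop_frac * keep.size))
            order = np.argsort(rf.feature_importances_)  # least important first
            keep = keep[order[n_drop:]]                  # drop the weakest n_drop
        return history  # inspect OOB accuracy versus number of variables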

  11. Principles of a simulation model for a variable-speed pitch-regulated wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Camblong, H.; Vidal, M.R.; Puiggali, J.R.

    2004-07-01

    This paper considers the basic principles for establishing a simulation model of a variable-speed, pitch-regulated wind turbine. This model is used to test various control algorithms designed with the aim of maximising energy yield and robustness and minimising flicker emission and dynamic drive train loads. One of the most complex elements of such a system is the interaction between wind and turbine. First, a detailed and didactic analysis of this interaction is given. This is used to understand some complicated phenomena and to help design a simpler and more efficient (in terms of processing time) mathematical model. Additional submodels are given for the mechanical coupling, the pitch system and the electrical power system, before the entire model is validated by comparison with field measurements on a 180 kW turbine. The complete simulation model is flexible, efficient and allows easy evaluation of different control algorithms. (author)
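
    As one concrete example of the wind-turbine interaction submodel such a simulator typically contains, the sketch below computes rotor power from a generic Cp(λ, β) approximation of the Heier/Slootweg type. The coefficients, air density, and rotor radius are illustrative assumptions, not parameters of the 180 kW turbine in the paper.

    import numpy as np

    RHO, R = 1.225, 11.0  # air density (kg/m^3) and rotor radius (m); assumed values

    def rotor_power(v, omega, beta):
        """Aerodynamic rotor power for wind speed v (m/s), rotor speed omega (rad/s),
        pitch angle beta (deg), using a common generic Cp(lambda, beta) fit."""
        lam = omega * R / v                               # tip-speed ratio
        lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
        cp = (0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i)
              + 0.0068 * lam)
        return 0.5 * RHO * np.pi * R**2 * max(cp, 0.0) * v**3

    print(rotor_power(v=9.0, omega=4.0, beta=0.0))        # power in watts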

  12. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  13. Optimization of a shorter variable-acquisition time for legs to achieve true whole-body PET/CT images.

    Science.gov (United States)

    Umeda, Takuro; Miwa, Kenta; Murata, Taisuke; Miyaji, Noriaki; Wagatsuma, Kei; Motegi, Kazuki; Terauchi, Takashi; Koizumi, Mitsuru

    2017-12-01

    The present study aimed to qualitatively and quantitatively evaluate PET images as a function of acquisition time for various leg sizes, and to optimize a shorter variable-acquisition time protocol for legs to achieve better qualitative and quantitative accuracy of true whole-body PET/CT images. The diameters of the legs to be modeled as phantoms were defined based on data derived from 53 patients. This study analyzed PET images of a NEMA phantom and three plastic bottle phantoms (diameter 5.68, 8.54 and 10.7 cm) that simulated the human body and legs, respectively. The phantoms comprised two spheres (diameters 10 and 17 mm) containing fluorine-18 fluorodeoxyglucose solution with sphere-to-background ratios of 4 at a background radioactivity level of 2.65 kBq/mL. All PET data were reconstructed with acquisition times ranging from 10 to 180 s, and 1200 s. We visually evaluated image quality and determined the coefficient of variance (CV) of the background, the contrast and the quantitative %error of the hot spheres, and then determined two shorter variable-acquisition protocols for legs. Lesion detectability and quantitative accuracy based on maximum standardized uptake values (SUVmax) in PET images of a patient acquired using the proposed protocols were also evaluated. A larger phantom and a shorter acquisition time resulted in increased background noise on images and decreased contrast in the hot spheres. A visual score of ≥ 1.5 was obtained when the acquisition time was ≥ 30 s for the three leg phantoms, and ≥ 120 s for the NEMA phantom. The quantitative %errors of the 10- and 17-mm spheres in the leg phantoms were within ±15 and ±10%, respectively, in PET images with a high CV. The mean SUVmax of three lesions using the current fixed-acquisition and the two proposed variable-acquisition time protocols in the clinical study was 3.1, 3.1 and 3.2, respectively, which did not significantly differ. Leg acquisition time per bed position of even 30-90
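
    A minimal sketch of the two image-quality metrics used above, assuming NumPy arrays of ROI voxel values: the background coefficient of variance (CV) and the quantitative %error of a hot sphere relative to its known activity. Function and argument names are illustrative.

    import numpy as np

    def background_cv(bg_roi):
        """Coefficient of variance of a background ROI, in percent."""
        return 100.0 * np.std(bg_roi) / np.mean(bg_roi)

    def percent_error(measured_activity, true_activity):
        """Quantitative %error of a hot sphere against its known activity."""
        return 100.0 * (measured_activity - true_activity) / true_activity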

  14. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    Science.gov (United States)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish the experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
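
    A hedged sketch of the pole-based index described above: fit an AR model to a short HP series, select the poles whose frequencies fall in an assigned band, and measure how far their mean radius sits from the unit circle (closer to the circle indicating a more regular oscillation in that band). The AR order, the one-sample-per-beat convention, and the use of normalized frequencies in cycles/beat are assumptions for illustration.

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def band_pole_distance(x, order=10, band=(0.04, 0.15)):
        """Distance from the unit circle of the mean pole radius in a frequency band."""
        ar = AutoReg(x, lags=order).fit()
        poly = np.concatenate(([1.0], -ar.params[1:]))  # 1 - a1 z^-1 - ... - ap z^-p
        poles = np.roots(poly)
        f = np.abs(np.angle(poles)) / (2.0 * np.pi)     # pole frequencies (cycles/beat)
        sel = poles[(f >= band[0]) & (f <= band[1])]
        if sel.size == 0:
            return np.nan
        return 1.0 - np.mean(np.abs(sel))               # smaller = more regular band rhythm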

  15. Economic Statistical Design of Variable Sampling Interval X̄ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using surrogate variables. In this paper, we present a model for the economic statistical design of a VSI (Variable Sampling Interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (Variable Sample Size) X̄ control chart, EWMA and CUSUM charts, and so on.
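
    A hedged sketch of the optimization step only: search the chart design parameters (sample size n, two sampling intervals h1 and h2, control limit k) that maximize an expected profit. The profit function below is a toy stand-in for the paper's economic-statistical model, and scipy's differential evolution stands in for the genetic algorithm; bounds and coefficients are invented.

    import numpy as np
    from scipy.optimize import differential_evolution

    def neg_expected_profit(params):
        n, h1, h2, k = params
        # Toy surrogate: sampling cost grows with n and 1/h, and an off-target
        # control limit k is penalized. Replace with the paper's profit model.
        return -(100.0 - 2.0 * n - 5.0 / h1 - 1.0 / h2 - 0.5 * (k - 3.0) ** 2)

    bounds = [(2, 20), (0.1, 2.0), (2.0, 8.0), (2.0, 4.0)]  # n, h1, h2, k
    result = differential_evolution(neg_expected_profit, bounds, seed=0)
    print(result.x, -result.fun)  # optimal design and its expected profit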

  16. Modeling key processes causing climate change and variability

    Energy Technology Data Exchange (ETDEWEB)

    Henriksson, S.

    2013-09-01

    Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and of being able to separate their influences on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and the results compared to measurements. The topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India, and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression, as well as the synchronising effect of external forcing, (3) identifying a power law shape S(f) ∝ f^(−α) for the spectrum of global mean temperature, with α ≈ 0.8 between multidecadal and El Niño timescales and a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects, and (7) an estimate of a mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy
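
    As a hedged illustration of the spectral shape in finding (3), the sketch below estimates the exponent α of S(f) ∝ f^(−α) by a straight-line fit to the log-log periodogram of a temperature series between multidecadal and El Niño timescales. Annual sampling and the band limits are assumptions; the input array would be a global-mean temperature record.

    import numpy as np
    from scipy.signal import periodogram

    def spectral_slope(x, fs=1.0, fmin=1.0 / 70.0, fmax=1.0 / 3.0):
        """Fit log S(f) = const - alpha * log f over [fmin, fmax]; return alpha."""
        f, S = periodogram(x, fs=fs, detrend="linear")
        sel = (f >= fmin) & (f <= fmax)
        slope, _ = np.polyfit(np.log(f[sel]), np.log(S[sel]), 1)
        return -slope  # alpha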

  17. Latent variable models are network models.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  18. Age and Sex Differences in Intra-Individual Variability in a Simple Reaction Time Task

    Science.gov (United States)

    Ghisletta, Paolo; Renaud, Olivier; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2018-01-01

    While age effects in reaction time (RT) tasks across the lifespan are well established for level of performance, analogous findings have started appearing also for indicators of intra-individual variability (IIV). Children are not only slower, but also display more variability than younger adults in RT. Yet, little is known about potential…

  19. Seasonal and interannual variability of the water exchange in the Turkish Straits System estimated by modelling

    Directory of Open Access Journals (Sweden)

    V. MADERICH

    2015-07-01

    Full Text Available A chain of simple linked models is used to simulate the seasonal and interannual variability of the Turkish Straits System. This chain includes two-layer hydraulic models of the Bosphorus and Dardanelles straits, simulating the exchange in terms of the level and density difference along each strait, and a one-dimensional, area-averaged layered model of the Marmara Sea. The chain of models is also complemented by a similar layered model of the Black Sea proper and by a one-layer Azov Sea model with the Kerch Strait. This linked chain of models is used to study the seasonal and interannual variability of the system in the period 1970-2009. The salinity of the Black Sea water flowing into the Aegean Sea increases by approximately 1.7 times through entrainment from the lower layer. The flow entering the lower layer of the Dardanelles Strait from the Aegean Sea is reduced by nearly 80% by the time it reaches the Black Sea. On the seasonal scale, maximal transport in the upper layer and minimal transport in the bottom layer occur during winter/spring for the Bosphorus and in spring for the Dardanelles Strait, whereas minimal transport in the upper layer and maximal undercurrent occur during summer for the Bosphorus Strait and autumn for the Dardanelles Strait. The increase of the freshwater flux into the Black Sea on interannual time scales (41 m³ s⁻¹ per year) is accompanied by a more than twofold larger growth of the Dardanelles outflow to the North Aegean (102 m³ s⁻¹ per year).
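
    An illustrative reduction of the layered strait balance underlying such models, using the classical Knudsen relations (volume and salt conservation across a two-layer strait). The numbers are rough, Bosphorus-like values chosen for illustration, not the paper's results.

    def knudsen(R, S1, S2):
        """Two-layer strait transports from Knudsen's relations.
        Volume balance: Q1 - Q2 = R (net freshwater input, m^3/s).
        Salt balance:   Q1 * S1 = Q2 * S2 (no net salt accumulation)."""
        Q2 = R * S1 / (S2 - S1)  # lower-layer (saline) inflow
        Q1 = R + Q2              # upper-layer (brackish) outflow
        return Q1, Q2

    # Rough Bosphorus-like values: R ~ 9000 m^3/s, upper/lower salinities 20/36.
    print(knudsen(R=9000.0, S1=20.0, S2=36.0))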

  20. A Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.

    Science.gov (United States)

    Sobolewski, Stanley John

    The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry, as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Some of the predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, as well as teacher and student data. This study isolated and identified predictor variables for the PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The sources of these data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine whether the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included: community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified with the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that more manifest variables were used, including achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous. The goodness-of-fit indices for the models were 0.77 for the first model and 0.83 for the second