WorldWideScience

Sample records for models produce time

  1. Monetary Shocks in Models with Inattentive Producers.

    Science.gov (United States)

    Alvarez, Fernando E; Lippi, Francesco; Paciello, Luigi

    2016-04-01

    We study models where prices respond slowly to shocks because firms are rationally inattentive. Producers must pay a cost to observe the determinants of the current profit-maximizing price, and hence observe them infrequently. To generate large real effects of monetary shocks in such a model, the time between observations must be long and/or highly volatile. Previous work on rational inattentiveness has allowed for observation intervals that are either constant-but-long (e.g. Caballero, 1989, or Reis, 2006) or volatile-but-short (e.g. Reis's (2006) example where observation costs are negligible), but not both. In these models, the real effects of monetary policy are small for realistic values of the duration between observations. We show that non-negligible observation costs produce both of these effects: intervals between observations are infrequent and volatile. This generates large real effects of monetary policy for realistic values of the average time between observations.

  2. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  3. Emissions Models and Other Methods to Produce Emission Inventories

    Science.gov (United States)

    An emissions inventory is a summary or forecast of the emissions produced by a group of sources in a given time period. Inventories of air pollution from mobile sources are often produced by models such as the MOtor Vehicle Emission Simulator (MOVES).

  4. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell-time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects, and the model has been extended to fit these data.

  5. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
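
    The task-queuing mechanism can be illustrated with a minimal Barabási-style priority-queue simulation. This is a simplified stand-in for the model in the record, with assumed parameters (`list_len`, `p`), not the exact dynamics or priority distributions of Vajna et al.:

```python
import random
import statistics

def simulate_queue(n_steps=20000, list_len=2, p=0.9, seed=1):
    """Barabasi-style priority queue: execute the highest-priority task
    with probability p, otherwise a random one; executed tasks are
    replaced by new tasks with fresh uniform priorities. Returns the
    waiting times of executed tasks."""
    rng = random.Random(seed)
    # (priority, arrival_step) for each task on the list
    tasks = [(rng.random(), 0) for _ in range(list_len)]
    waits = []
    for step in range(1, n_steps + 1):
        if rng.random() < p:
            idx = max(range(list_len), key=lambda i: tasks[i][0])
        else:
            idx = rng.randrange(list_len)
        _, arrived = tasks[idx]
        waits.append(step - arrived)
        tasks[idx] = (rng.random(), step)  # replace with a new task
    return waits

waits = simulate_queue()
# Strongly priority-driven selection (p near 1) produces a broad
# waiting-time distribution: most tasks run at once, a few wait very long.
print(statistics.median(waits), max(waits))
```

    The heavy tail in the waiting times is the "bursty" signature the record describes; in the exact models this tail is a power law with a tunable exponent.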

  6. Modeling the competition between PHA-producing and non-PHA-producing bacteria in feast-famine SBR and staged CSTR systems.

    Science.gov (United States)

    Marang, Leonie; van Loosdrecht, Mark C M; Kleerebezem, Robbert

    2015-12-01

    Although the enrichment of specialized microbial cultures for the production of polyhydroxyalkanoates (PHA) is generally performed in sequencing batch reactors (SBRs), the required feast-famine conditions can also be established using two or more continuous stirred-tank reactors (CSTRs) in series with partial biomass recirculation. The use of CSTRs offers several advantages, but will result in distributed residence times and a less strict separation between feast and famine conditions. The aim of this study was to investigate the impact of the reactor configuration, and various process and biomass-specific parameters, on the enrichment of PHA-producing bacteria. A set of mathematical models was developed to predict the growth of Plasticicumulans acidivorans, as a model PHA producer, in competition with a non-storing heterotroph. A macroscopic model considering lumped biomass and an agent-based model considering individual cells were created to study the effect of residence time distribution and the resulting distributed bacterial states. The simulations showed that in the 2-stage CSTR system the selective pressure for PHA-producing bacteria is significantly lower than in the SBR, and strongly affected by the chosen feast-famine ratio. This is the result of substrate competition based on both the maximum specific substrate uptake rate and substrate affinity. Although the macroscopic model overestimates the selective pressure in the 2-stage CSTR system, it provides a quick and fairly good impression of the reactor performance and the impact of process and biomass-specific parameters. © 2015 Wiley Periodicals, Inc.
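
    The substrate-competition mechanism can be sketched as a minimal macroscopic model: two biomasses with Monod uptake kinetics competing for one substrate, integrated by forward Euler. All parameter values are illustrative assumptions, not the paper's calibrated ones:

```python
# Minimal macroscopic sketch of two biomasses competing for one substrate
# under Monod kinetics (illustrative parameters, not those of the paper).
def step(S, X1, X2, dt=0.01,
         qmax1=2.0, Ks1=0.05,   # "storer": fast maximum uptake, poor affinity
         qmax2=1.0, Ks2=0.01,   # competitor: slower uptake, high affinity
         Y=0.5):                # biomass yield on substrate
    q1 = qmax1 * S / (Ks1 + S)
    q2 = qmax2 * S / (Ks2 + S)
    dS = -(q1 * X1 + q2 * X2) * dt
    return max(S + dS, 0.0), X1 + Y * q1 * X1 * dt, X2 + Y * q2 * X2 * dt

S, X1, X2 = 1.0, 0.1, 0.1        # feast: substrate initially in excess
for _ in range(2000):
    S, X1, X2 = step(S, X1, X2)
# At high substrate the fast-uptake organism grows faster; at low substrate
# the high-affinity one does, mirroring the paper's point that competition
# rests on both maximum uptake rate and substrate affinity.
print(round(S, 3), round(X1, 3), round(X2, 3))
```

    Here the feast phase dominates, so the fast-uptake organism ends up ahead; a long famine tail would tilt the outcome toward the high-affinity competitor.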

  7. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  8. Time-resolved spectroscopy of nonequilibrium ionization in laser-produced plasmas

    International Nuclear Information System (INIS)

    Marjoribanks, R.S.

    1988-01-01

    The highly transient ionization characteristic of laser-produced plasmas at high energy densities has been investigated experimentally, using x-ray spectroscopy with time resolution of less than 20 ps. Spectroscopic diagnostics of plasma density and temperature were used, including line ratios, line profile broadening and continuum emission, to characterize the plasma conditions without relying immediately on ionization modeling. The experimentally measured plasma parameters were used as independent variables, driving an ionization code, as a test of ionization modeling divorced from hydrodynamic calculations. Several state-of-the-art streak spectrographs, each recording a fiducial of the laser peak along with the time-resolved spectrum, characterized the laser heating of thin signature layers of different atomic numbers embedded in plastic targets. A novel design of crystal spectrograph, with a conically curved crystal, was developed. Coupled with a streak camera, it provided high resolution (λ/Δλ > 1000) and a collection efficiency roughly 20-50 times that of planar crystal spectrographs, affording improved spectra for quantitative reduction and greater sensitivity for the diagnosis of weak emitters. Experimental results were compared to hydrocode and ionization code simulations, with poor agreement. The conclusions question the appropriateness of describing electron velocity distributions by a temperature parameter during the time of laser illumination and emphasize the importance of characterizing the distribution more generally.

  9. Prime Time Power: Women Producers, Writers and Directors on TV.

    Science.gov (United States)

    Steenland, Sally

    This report analyzes the number of women working in the following six decision making jobs in prime time television: (1) executive producer; (2) supervising producer; (3) producer; (4) co-producer; (5) writer; and (6) director. The women who hold these positions are able to influence the portrayal of women on television as well as to improve the…

  10. Late-time particle emission from laser-produced graphite plasma

    Energy Technology Data Exchange (ETDEWEB)

    Harilal, S. S.; Hassanein, A.; Polek, M. [School of Nuclear Engineering, Center for Materials Under Extreme Environment, Purdue University, West Lafayette, Indiana 47907 (United States)

    2011-09-01

    We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicated that such firework-like emission strongly depends on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  11. Late-time particle emission from laser-produced graphite plasma

    International Nuclear Information System (INIS)

    Harilal, S. S.; Hassanein, A.; Polek, M.

    2011-01-01

    We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicated that such firework-like emission strongly depends on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  12. Modelling the oil producers: Capturing oil industry knowledge in a behavioural simulation model

    International Nuclear Information System (INIS)

    Morecroft, J.D.W.; Van der Heijden, K.A.J.M.

    1992-01-01

    A group of senior managers and planners from a major oil company met to discuss the changing structure of the oil industry with the purpose of improving group understanding of oil market behaviour for use in global scenarios. This broad ranging discussion led to a system dynamics simulation model of the oil producers. The model produced new insights into the power and stability of OPEC (the major oil producers' organization), the dynamics of oil prices, and the investment opportunities of non-OPEC producers. The paper traces the model development process, starting from group discussions and leading to working simulation models. Particular attention is paid to the methods used to capture team knowledge and to ensure that the computer models reflected opinions and ideas from the meetings. The paper describes how flip-chart diagrams were used to collect ideas about the logic of the principal producers' production decisions. A sub-group of the project team developed and tested an algebraic model. The paper shows partial model simulations used to build confidence and a sense of ownership in the algebraic formulations. Further simulations show how the full model can stimulate thinking about producers' behaviour and oil prices. The paper concludes with comments on the model building process. 11 figs., 37 refs.

  13. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
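
    The AR-fit-then-MA-interpretation workflow can be sketched for the simplest AR(1) case. This is a hedged illustration on synthetic data; the record's FORTRAN algorithm and its model-order selection are not reproduced:

```python
import random

# Sketch of the Scargle-style workflow: fit an AR(1) model to a simulated
# series, then expand it into its (infinite) MA pulse representation.
rng = random.Random(42)
phi_true, n = 0.7, 5000
x = [0.0]
for _ in range(n - 1):
    x.append(phi_true * x[-1] + rng.gauss(0, 1))   # AR(1) with unit noise

# Yule-Walker estimate of the AR(1) coefficient: lag-1 autocovariance ratio
num = sum(x[t] * x[t - 1] for t in range(1, n))
den = sum(v * v for v in x)
phi_hat = num / den

# The equivalent MA weights are psi_k = phi^k: each data point is a sum of
# past random "pulses" with geometrically decaying amplitudes.
ma_weights = [phi_hat ** k for k in range(5)]
print(round(phi_hat, 2), [round(w, 2) for w in ma_weights])
```

    The MA expansion is what gives the fitted AR model its physical reading as randomly timed pulses, which is the interpretive step the abstract emphasizes.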

  14. One-dimensional modeling of thermal energy produced in a seismic fault

    Science.gov (United States)

    Konga, Guy Pascal; Koumetio, Fidèle; Yemele, David; Olivier Djiogang, Francis

    2017-12-01

    Generally, one observes an anomaly of temperature before a big earthquake. In this paper, we established the expression of thermal energy produced by friction forces between the walls of a seismic fault while considering the dynamics of a one-dimensional spring-block model. It is noted that, before the rupture of a seismic fault, displacements are caused by microseisms. The curves of variation of this thermal energy with time show that, for oscillatory and aperiodic displacement, the thermal energy is accumulated in the same way. The study reveals that thermal energy as well as temperature increases abruptly after a certain amount of time. We suggest that the corresponding time marks the start of the observed temperature anomaly, which can be considered a precursor of a large earthquake. We suggest that the thermal energy can heat gases and dilate rocks until they crack. The warm gases can then pass through the cracks towards the surface. The cracks created by thermal energy can also contribute to the rupture of the seismic fault. We also suggest that the theoretical model of thermal energy produced in a seismic fault, associated with a large quantity of experimental data, may help in the prediction of earthquakes.

  15. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
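
    The idea of adapting model parameters recursively as new performance data arrive can be sketched with a scalar Kalman filter on a hypothetical one-parameter linear model. The actual UMP and its extended Kalman filter are considerably more elaborate; this only illustrates the recursive-update mechanism:

```python
import random

# Minimal scalar Kalman sketch (not the UMP itself): adapt one trait-like
# parameter a in y = a * hours_awake + noise as new PVT-like data arrive.
rng = random.Random(0)
a_true, r, q = 2.5, 4.0, 1e-4   # true slope, observation variance, process noise

a_hat, p = 1.0, 10.0            # prior mean and variance of the parameter
for t in range(1, 33):          # e.g. one measurement every 2 h of wakefulness
    h = float(t)                        # observation sensitivity dy/da
    y = a_true * h + rng.gauss(0, r ** 0.5)
    p += q                              # predict: parameter may drift slightly
    k = p * h / (h * p * h + r)         # Kalman gain
    a_hat += k * (y - a_hat * h)        # update toward the measurement residual
    p *= (1 - k * h)                    # posterior variance shrinks with data
print(round(a_hat, 2), round(p, 4))
```

    As in the record, the estimate starts at a population-like prior and converges toward the individual's value as measurements accumulate, with the posterior variance tracking how well individualized the model is.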

  16. The use of synthetic input sequences in time series modeling

    International Nuclear Information System (INIS)

    Oliveira, Dair Jose de; Letellier, Christophe; Gomes, Murilo E.D.; Aguirre, Luis A.

    2008-01-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure

  17. A stochastic space-time model for intermittent precipitation occurrences

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2016-01-01

    Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed through a space-time Gaussian random field (GRF) with random scaling that varies along time, or along both space and time. It can be viewed as a generalization of the purely spatial tRF, and has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can produce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model has noticeable improvements in some characteristics of joint rainfall occurrences for the data we have considered.
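
    The thresholding construction can be illustrated in one dimension: a latent Gaussian AR(1) series is thresholded into 0/1 occurrences, and temporal persistence shows up in the conditional rain probability. This is a toy stand-in for the space-time tRF, with assumed correlation and threshold values:

```python
import random

# Toy version of the thresholding idea: a latent Gaussian AR(1) series is
# thresholded to produce intermittent 0/1 "rain" occurrences, then we check
# the conditional rain probability given that the previous step was rainy.
rng = random.Random(7)
phi, thresh, n = 0.8, 1.0, 50000
z, occ = 0.0, []
for _ in range(n):
    z = phi * z + rng.gauss(0, (1 - phi ** 2) ** 0.5)  # unit-variance AR(1)
    occ.append(1 if z > thresh else 0)

p_rain = sum(occ) / n
p_rain_given_rain = (sum(a & b for a, b in zip(occ, occ[1:]))
                     / max(sum(occ[:-1]), 1))
print(round(p_rain, 3), round(p_rain_given_rain, 3))
```

    The conditional probability far exceeds the marginal one, which is the kind of joint-occurrence behavior the record uses to evaluate its models.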

  18. A stochastic space-time model for intermittent precipitation occurrences

    KAUST Repository

    Sun, Ying

    2016-01-28

    Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed through a space-time Gaussian random field (GRF) with random scaling that varies along time, or along both space and time. It can be viewed as a generalization of the purely spatial tRF, and has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can produce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model has noticeable improvements in some characteristics of joint rainfall occurrences for the data we have considered.

  19. Business Models and Producer-Owned Ventures: Choices, Challenges, and Changes

    OpenAIRE

    Kenkel, Philip L.; Park, John L.

    2007-01-01

    Producer-owned business models are rapidly evolving. Producer-owned, value-added ventures face a number of organizational challenges, including capital acquisition, security exchange registration, antitrust exemption, borrowing eligibility, and operational flexibility. This paper examines the success of evolving producer-owned business models in addressing these challenges. The need for uniform criteria to distinguish producer-owned business from other business forms throughout the complex st...

  20. Atmospheric modelling and prediction at time scales from days to seasons

    CSIR Research Space (South Africa)

    Landman, WA

    2010-09-01

    Full Text Available. …to seasonal forecasts, and produce multi-decadal climate change projections. This paper focuses on the shorter time-range from days to seasons. The conformal-cubic atmospheric model (CCAM) is an atmospheric global circulation model (AGCM) that can operate…

  1. Real-time Social Internet Data to Guide Forecasting Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Valle, Sara Y. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-20

    Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach is real-time, data-driven forecasting with quantified uncertainty: not just for weather anymore. Information flow from human observations of events through an Internet system and classification algorithms is used to produce quantitatively uncertain forecasts. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  2. Time-resolved soft x-ray spectra from laser-produced Cu plasma

    International Nuclear Information System (INIS)

    Cone, K.V.; Dunn, J.; Baldis, H.A.; May, M.J.; Purvis, M.A.; Scott, H.A.; Schneider, M.B.

    2012-01-01

    The volumetric heating of a thin copper target has been studied with time-resolved x-ray spectroscopy. The copper target was heated from a plasma produced using the Lawrence Livermore National Laboratory's Compact Multipulse Terawatt (COMET) laser. A variable spaced grating spectrometer coupled to an x-ray streak camera measured soft x-ray emission (800-1550 eV) from the back of the copper target to characterize the bulk heating of the target. Radiation hydrodynamic simulations were performed in two dimensions using the HYDRA code. The target conditions calculated by HYDRA were post-processed with the atomic kinetics code CRETIN to generate synthetic emission spectra. A comparison between the experimental and simulated spectra indicates the presence of specific ionization states of copper and the corresponding electron temperatures and ion densities throughout the laser-heated copper target.

  3. A Model for Real-Time Data Reputation Via Cyber Telemetry

    Science.gov (United States)

    2016-06-01

    Naval Postgraduate School, Monterey, California; Master's thesis (approved for public release; distribution is unlimited); author: Beau M… From the abstract: a development methodology which focuses on iterative cycles, known as sprints, to produce the capabilities of the system. Traditional waterfall models do not allow for…

  4. Models and analysis for multivariate failure time data

    Science.gov (United States)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach.

  5. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem by maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
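
    The variance-minimization idea behind optimal sampling can be illustrated with a toy one-parameter design problem (exponential decay with an assumed nominal rate); the paper's quantum-inspired evolutionary algorithm is not reproduced here:

```python
import math

# Toy D-optimal design for one parameter: for y(t) = exp(-k t) observed with
# constant-variance noise, the variance of the estimate of k is minimized by
# sampling where the sensitivity |dy/dk| = t * exp(-k t) is largest, i.e. at
# t = 1/k for the nominal k.
k = 0.5                                        # assumed nominal rate
candidates = [0.25 * i for i in range(1, 41)]  # candidate sampling times
sens = {t: (t * math.exp(-k * t)) ** 2 for t in candidates}
t_best = max(sens, key=sens.get)
print(t_best)   # the most informative single sampling time, near 1/k
```

    Sampling too early (y barely changed) or too late (signal decayed away) carries little information about k, which is exactly why heuristic, evenly spaced schedules can waste expensive measurements.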

  6. Modelling and Formal Verification of Timing Aspects in Large PLC Programs

    CERN Document Server

    Fernandez Adiego, B; Blanco Vinuela, E; Tournier, J-C; Gonzalez Suarez, V M; Blech, J O

    2014-01-01

    One of the main obstacles that prevent model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper brings an answer to this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow users to balance the trade-off between the complexity of the model, i.e. its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which produces models for different model checkers directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.

  7. Time-resolved probing of electron thermal conduction in femtosecond-laser-pulse-produced plasmas

    International Nuclear Information System (INIS)

    Vue, B.T.V.

    1993-06-01

    We present time-resolved measurements of reflectivity, transmissivity and frequency shifts of probe light interacting with the rear of a disk-like plasma produced by irradiation of a transparent solid target with 0.1 ps FWHM laser pulses at a peak intensity of 5 × 10¹⁴ W/cm². Experimental results show a large increase in reflection, revealing rapid formation of a steep-gradient, overdense surface plasma layer during the first picosecond after irradiation. Frequency shifts due to a moving ionization front created by thermal conduction into the solid target are recorded. Calculations using a nonlinear thermal heat wave model show good agreement with the measured frequency shifts, further confirming the strong thermal transport effect.

  8. Statistical modelling of space-time processes with application to wind power

    DEFF Research Database (Denmark)

    Lenzi, Amanda

    This thesis aims at contributing to the wind power literature by building and evaluating new statistical techniques for producing forecasts at multiple locations and lead times using spatio-temporal information. By exploring the features of a rich portfolio of wind farms in western Denmark, we investigate… We propose spatial models for predicting wind power generation at two different time scales: for annual average wind power generation and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial…

  9. A Framework for Relating Timed Transition Systems and Preserving TCTL Model Checking

    DEFF Research Database (Denmark)

    Jacobsen, Lasse; Jacobsen, Morten; Møller, Mikael Harkjær

    2010-01-01

    Many formal translations between time dependent models have been proposed over the years. While some of them produce timed bisimilar models, others preserve only reachability or (weak) trace equivalence. We suggest a general framework for arguing when a translation preserves Timed Computation Tree Logic (TCTL) or its safety fragment. The framework works at the level of timed transition systems, making it independent of the modeling formalisms and applicable to many of the translations published in the literature. Finally, we present a novel translation from extended Timed-Arc Petri Nets to Networks of Timed Automata, and using the framework we argue that it preserves the full TCTL. The translation has been implemented in the verification tool TAPAAL.

  10. Formation time of hadrons and density of matter produced in relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Pisut, J.; Zavada, P.

    1994-06-01

    Densities of interacting hadronic matter produced in Oxygen-Lead and Sulphur-Lead collisions at 200 GeV/nucleon are estimated as a function of the formation time of hadrons. Uncertainties in our knowledge of the critical temperature Tc and of the formation time of hadrons τ0 permit at present three scenarios: an optimistic one (QGP has already been produced in collisions of Oxygen and Sulphur with heavy ions and will be copiously produced in Lead collisions), a pessimistic one (QGP cannot be produced at 200 GeV/nucleon) and an intermediate one (QGP has not been produced in Oxygen and Sulphur interactions with heavy ions and will be at best produced only marginally in Pb collisions). The last option is found to be the most probable. (author)

  11. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters methods, the seasonal autoregressive integrated moving-average, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
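As a sketch of the seasonal forecasting idea (the study itself worked in R), here is a minimal additive Holt-Winters routine in pure Python; the smoothing constants and the toy series are illustrative assumptions, not values from the paper:

```python
def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=4):
    """Additive Holt-Winters: level + trend + seasonal components, h-step forecasts."""
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] - level for i in range(period)]
    for t in range(len(y)):
        s = season[t % period]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]

# toy series: linear trend plus a period-4 seasonal pattern
seasonal = [5.0, 0.0, -3.0, -2.0]
y = [10.0 + 0.5 * t + seasonal[t % 4] for t in range(24)]
forecast = holt_winters_additive(y, period=4)
print([round(f, 1) for f in forecast])
```

The forecasts continue both the trend and the seasonal shape, which is the behavior the abstract describes for its fitted models.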

  12. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

During the 1970s, much interest was focused on fractionation in radiotherapy with the aim of improving the tumour control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose were introduced and used despite many shortcomings. It has been claimed that the more recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared the Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose Factor concepts varied by 1.6% from the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept, which has been in use for more than a decade, as its results are within ±2% of those predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
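For readers unfamiliar with the linear quadratic model, comparisons between fractionation schedules are commonly made through the biologically effective dose, BED = n·d·(1 + d/(α/β)). The schedules and the α/β value below are hypothetical illustrations, not data from the study:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy): n fractions of d Gy, linear quadratic model."""
    return n * d * (1 + d / alpha_beta)

# hypothetical schedules with alpha/beta = 10 Gy (a commonly assumed tumour value)
conventional = bed(n=30, d=2.0, alpha_beta=10.0)       # 30 x 2.00 Gy
hypofractionated = bed(n=20, d=2.75, alpha_beta=10.0)  # 20 x 2.75 Gy
print(conventional, hypofractionated)  # 72.0 70.125
```

Two schedules with different fraction sizes can thus be placed on a common scale, which is the role the empirical Time Dose Factor formulae played before the linear quadratic model.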

  13. Analysis of time- and space-resolved Na-, Ne-, and F-like emission from a laser-produced bromine plasma

    International Nuclear Information System (INIS)

    Goldstein, W.H.; Young, B.K.F.; Osterheld, A.L.; Stewart, R.E.; Walling, R.S.; Bar-Shalom, A.

    1991-01-01

Advances in the efficiency and accuracy of computational atomic physics and collisional radiative modeling promise to place the analysis and diagnostic application of L-shell emission on a par with the simpler K-shell regime. Coincident improvements in spectroscopic plasma measurements yield optically thin emission spectra from small, homogeneous regions of plasma, localized both in space and time. Together, these developments can severely test models for high-density, high-temperature plasma formation and evolution, and non-LTE atomic kinetics. In this paper we present highly resolved measurements of n=3 to n=2 X-ray line emission from a laser-produced bromine microplasma. The emission is both space- and time-resolved, allowing us to apply simple, steady-state, 0-dimensional spectroscopic models to the analysis. These relativistic, multi-configurational, distorted wave collisional-radiative models were created using the HULLAC atomic physics package. Using these models, we have analyzed the F-like, Ne-like and Na-like (satellite) spectra with respect to temperature, density and charge-state distribution. This procedure leads to a full characterization of the plasma conditions. 9 refs., 3 figs

  14. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

The 1998-1999 RTMOD project developed a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run at several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for display, inter-comparison and analysis of the forecasts. RTMOD focussed on model inter-comparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution in both latitude and longitude, the domain extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, the existing statistical results were recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time
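The comparison grid described in the abstract can be reproduced directly; a quick sketch confirming its size:

```python
# RTMOD comparison grid from the abstract: 0.5 deg resolution, 5W-40E, 40N-65N
lon_min, lon_max, lat_min, lat_max, step = -5.0, 40.0, 40.0, 65.0, 0.5
n_lon = int(round((lon_max - lon_min) / step)) + 1  # 91 longitude nodes
n_lat = int(round((lat_max - lat_min) / step)) + 1  # 51 latitude nodes
nodes = [(lon_min + i * step, lat_min + j * step)
         for j in range(n_lat) for i in range(n_lon)]
print(n_lon, n_lat, len(nodes))  # 91 51 4641
```

So each participating model had to report concentration predictions at 4641 grid nodes per forecast time.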

  15. A novel methodology to model the cooling processes of packed horticultural produce using 3D shape models

    Science.gov (United States)

    Gruyters, Willem; Verboven, Pieter; Rogge, Seppe; Vanmaercke, Simon; Ramon, Herman; Nicolai, Bart

    2017-10-01

Freshly harvested horticultural produce requires proper temperature management to maintain its high economic value. To this end, low-temperature storage is of crucial importance for maintaining high product quality. Both the package design of packed produce and the different steps in the postharvest cold chain can be optimized by numerical modelling of the relevant transport phenomena. This work presents a novel methodology to accurately model both the random filling of produce in a package and the subsequent cooling process. First, a cultivar-specific database of more than 100 realistic CAD models of apple and pear fruit is built with a validated geometrical 3D shape model generator. To provide an accurate representation of a realistic picking season, the model generator also takes into account the biological variability of the produce shape. Next, a discrete element model (DEM) randomly chooses surface-meshed bodies from the database to simulate the gravitational filling process of produce in a box or bin, using actual mechanical properties of the fruit. A computational fluid dynamics (CFD) model is then developed with the final stacking arrangement of the produce to study the cooling efficiency of packages under several conditions and configurations. Here, a typical precooling operation is simulated to demonstrate the large differences between using actual 3D shapes of the fruit and an equivalent-spheres approach that simplifies the problem drastically. From this study, it is concluded that using a simplified representation of the actual fruit shape may lead to a severe overestimation of the cooling behaviour.

  16. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
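The perturbation procedure described above can be sketched with a toy model standing in for a trained MWA-ANN; the window width, forcing values and linear response below are illustrative assumptions, not details of the study's networks:

```python
def moving_window_average(x, w):
    """Trailing moving window average (MWA) of width w."""
    return [sum(x[max(0, t - w + 1):t + 1]) / (t - max(0, t - w + 1) + 1)
            for t in range(len(x))]

def sensitivity(model, forcing, delta=1.0):
    """Finite-difference sensitivity: change in response per unit change in forcing."""
    base = model(forcing)
    perturbed = model([v + delta for v in forcing])
    return [(p - b) / delta for p, b in zip(perturbed, base)]

# toy stand-in for a trained MWA-ANN: response is twice the 3-step window average
model = lambda rain: [2.0 * m for m in moving_window_average(rain, 3)]
rain = [0.0, 5.0, 2.0, 8.0, 1.0, 4.0]
print(sensitivity(model, rain))  # each value is approximately 2.0 for this linear toy
```

For a real (nonlinear) ANN the sensitivities would vary over time, which is exactly the temporal variation the study examines.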

  17. Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

    Science.gov (United States)

    Adachi, Yasumoto; Makita, Kohei

    2015-09-01

Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for the 2 years between 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rate of condemnation due to mycobacteriosis.
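The detection rule the abstract implies (flag weeks whose condemned counts exceed the upper 95% prediction bound) can be sketched as follows; the counts and predictive standard deviations are invented for illustration, not data from the inspection center:

```python
def outbreak_weeks(observed, predicted, sd, z=1.96):
    """Flag week indices where condemned counts exceed the upper 95% prediction bound."""
    return [t for t, (obs, mu, s) in enumerate(zip(observed, predicted, sd))
            if obs > mu + z * s]

# hypothetical weekly counts vs. a model's expected value and predictive sd
observed  = [3, 4, 12, 5, 2, 9]
predicted = [4, 4,  4, 4, 4, 4]
sd        = [2, 2,  2, 2, 2, 2]
print(outbreak_weeks(observed, predicted, sd))  # [2, 5]
```

Weeks 2 and 5 exceed the 7.92 upper bound and would trigger notification of the producer.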

  18. Larger Neural Responses Produce BOLD Signals That Begin Earlier in Time

    Directory of Open Access Journals (Sweden)

    Serena eThompson

    2014-06-01

Full Text Available Functional MRI analyses commonly rely on the assumption that the temporal dynamics of hemodynamic response functions (HRFs) are independent of the amplitude of the neural signals that give rise to them. The validity of this assumption is particularly important for techniques that use fMRI to resolve sub-second timing distinctions between responses, in order to make inferences about the ordering of neural processes. Whether or not the detailed shape of the HRF is independent of neural response amplitude remains an open question, however. We performed experiments in which we measured responses in primary visual cortex (V1) to large, contrast-reversing checkerboards at a range of contrast levels, which should produce varying amounts of neural activity. Ten subjects (ages 22-52) were studied in each of two experiments using 3 Tesla scanners. We used rapid, 250 msec, temporal sampling (repetition time, or TR) and both short and long inter-stimulus interval (ISI) stimulus presentations. We tested for a systematic relationship between the onset of the HRF and its amplitude across conditions, and found a strong negative correlation between the two measures when stimuli were separated in time (long- and medium-ISI experiments), but not in the short-ISI experiment. Thus, stimuli that produce larger neural responses, as indexed by HRF amplitude, also produced HRFs with shorter onsets. The relationship between amplitude and latency was strongest in voxels with the lowest mean-normalized variance (i.e., parenchymal voxels). The onset differences observed in the longer-ISI experiments are likely attributable to mechanisms of neurovascular coupling, since they are substantially larger than reported differences in the onset of action potentials in V1 as a function of response amplitude.
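The reported amplitude-latency relationship is a negative correlation between HRF amplitude and HRF onset. A minimal sketch of that computation with hypothetical values (not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical amplitudes (% signal change) and onsets (s): larger responses start earlier
amplitude = [0.4, 0.8, 1.3, 1.9, 2.6]
onset_s   = [2.1, 1.9, 1.6, 1.5, 1.2]
print(pearson_r(amplitude, onset_s))  # strongly negative
```

A value near -1, as here, is the pattern the long- and medium-ISI experiments exhibited across conditions.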

  19. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. We then define an example scenario based on current Czech legislation and capture it in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and adds unnecessary complexity to the models. Given the unsatisfying results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  20. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because ANN is a data-driven approach that can be trained to map past values of a time series. In this study the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving-average model, was compared by utilizing gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
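The three accuracy measures named above are straightforward to compute; a self-contained sketch (the data are illustrative, not the gold prices used in the study):

```python
def mad(actual, forecast):
    """Mean absolute deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean square error."""
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)) ** 0.5

def mape(actual, forecast):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual, forecast = [100, 110, 120, 130], [98, 112, 117, 135]
print(mad(actual, forecast), round(rmse(actual, forecast), 3),
      round(mape(actual, forecast), 2))  # 3.0 3.24 2.54
```

RMSE penalizes large errors more heavily than MAD, while MAPE is scale-free, which is why studies of this kind typically report all three.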

  1. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    Science.gov (United States)

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  2. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Science.gov (United States)

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  3. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
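A rough sketch of the setup described in these records: in an AFT model, log survival time is linear in the covariates with scaled Gumbel (minimum) errors, which yields Weibull-distributed times, and the mediated effect can be formed as a product of coefficients. All coefficients below are hypothetical placeholders, not estimates from the simulations:

```python
import math
import random

random.seed(1)

def weibull_aft_time(x, m, b0=3.0, b_treat=0.4, b_med=0.3, sigma=0.5):
    """log T = b0 + b_treat*x + b_med*m + sigma*G, G ~ standard Gumbel (minimum)."""
    g = math.log(random.expovariate(1.0))  # log of an Exp(1) draw is Gumbel(min)
    return math.exp(b0 + b_treat * x + b_med * m + sigma * g)

a = 0.5               # hypothetical effect of treatment on the mediator
b_med = 0.3           # hypothetical mediator coefficient in the AFT model
indirect = a * b_med  # product-of-coefficients mediated effect on the log-time scale
times = [weibull_aft_time(x=1, m=a * 1) for _ in range(1000)]
print(round(indirect, 2), min(times) > 0)  # 0.15 True
```

Because the AFT model is linear on the log-time scale, the product-of-coefficients logic of mediation analysis carries over directly, which is the integration advantage the records attribute to LIFEREG over PHREG.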

  4. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  5. Bayesian dynamic modeling of time series of dengue disease case counts.

    Science.gov (United States)

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful

  6. A Modeling Framework for Predicting the Size of Sediments Produced on Hillslopes and Supplied to Channels

    Science.gov (United States)

    Sklar, L. S.; Mahmoudi, M.

    2016-12-01

    Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a `geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. 
Chemical weathering potential

  7. A Monte Carlo model to produce baryons in e+e- annihilation

    International Nuclear Information System (INIS)

    Meyer, T.

    1981-08-01

A simple model is described that extends the Field-Feynman model to baryon production in quark fragmentation. The model predicts baryon-baryon correlations within jets and in opposite jets produced in electron-positron annihilation. Existing data are well described by the model. (orig.)

  8. Vibration analysis diagnostics by continuous-time models: A case study

    International Nuclear Information System (INIS)

    Pedregal, Diego J.; Carmen Carnero, Ma.

    2009-01-01

In this paper a forecasting system for condition monitoring is developed based on vibration signals, in order to improve the diagnosis of a critical piece of equipment at an industrial plant. The system is based on statistical models capable of forecasting the state of the equipment, combined with a cost model that defines the time of preventive replacement as the point at which the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme.

  9. Vibration analysis diagnostics by continuous-time models: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Pedregal, Diego J. [Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: Diego.Pedregal@uclm.es; Carmen Carnero, Ma. [Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: Carmen.Carnero@uclm.es

    2009-02-15

In this paper a forecasting system for condition monitoring is developed based on vibration signals, in order to improve the diagnosis of a critical piece of equipment at an industrial plant. The system is based on statistical models capable of forecasting the state of the equipment, combined with a cost model that defines the time of preventive replacement as the point at which the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme.
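The replacement-time rule described in these two records resembles the classic age-replacement policy: replace preventively at age t (cost cp) or on failure (cost cf), minimizing expected cost per unit time. A numerical sketch under hypothetical Weibull lifetime and cost parameters (the paper's actual models are bivariate and continuous-time, so this is only the cost-side idea):

```python
import math

def reliability(t, shape=2.5, scale=1000.0):
    """Weibull survival function R(t); shape and scale are hypothetical."""
    return math.exp(-((t / scale) ** shape))

def cost_rate(t, cp=1.0, cf=10.0, steps=400):
    """Expected cost per unit time: (cp*R(t) + cf*(1-R(t))) / E[cycle length]."""
    dt = t / steps
    cycle = sum(0.5 * (reliability(i * dt) + reliability((i + 1) * dt)) * dt
                for i in range(steps))  # trapezoidal integral of R(u) over [0, t]
    r = reliability(t)
    return (cp * r + cf * (1.0 - r)) / cycle

candidates = range(100, 2001, 50)
best_t = min(candidates, key=cost_rate)
print(best_t, round(cost_rate(best_t), 5))
```

Because failures cost ten times as much as planned replacements here, the minimum-cost replacement age falls well before the Weibull scale parameter.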

  10. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  11. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  12. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f
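As a minimal example of expressing a phenomenon in relation to its own past values, an AR(1) model can be simulated and fitted by conditional least squares; this is a generic sketch, not an example from the book:

```python
import random

random.seed(42)

def simulate_ar1(phi, n, sigma=1.0):
    """Generate x_t = phi * x_{t-1} + e_t, with e_t ~ N(0, sigma^2)."""
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0.0, sigma))
    return x

def fit_ar1(x):
    """Conditional least-squares estimate of phi in x_t = phi * x_{t-1} + e_t."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

series = simulate_ar1(phi=0.7, n=5000)
print(round(fit_ar1(series), 2))
```

With a long enough series the estimate recovers the generating coefficient closely, which is the identification step that more general time series models elaborate on.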

  13. Using stochastic space-time models to map extreme precipitation in southern Portugal

    Directory of Open Access Journals (Sweden)

    A. C. Costa

    2008-07-01

    The topographic characteristics and spatial climatic diversity are significant in the South of continental Portugal where the rainfall regime is typically Mediterranean. Direct sequential cosimulation is proposed for mapping an extreme precipitation index in southern Portugal using elevation as auxiliary information. The analysed index (R5D can be considered a flood indicator because it provides a measure of medium-term precipitation total. The methodology accounts for local data variability and incorporates space-time models that allow capturing long-term trends of extreme precipitation, and local changes in the relationship between elevation and extreme precipitation through time. Annual gridded datasets of the flood indicator are produced from 1940 to 1999 on 800 m×800 m grids by using the space-time relationship between elevation and the index. Uncertainty evaluations of the proposed scenarios are also produced for each year. The results indicate that the relationship between elevation and extreme precipitation varies locally and has decreased through time over the study region. In wetter years the flood indicator exhibits the highest values in mountainous regions of the South, while in drier years the spatial pattern of extreme precipitation has much less variability over the study region. The uncertainty of extreme precipitation estimates also varies in time and space, and in earlier decades is strongly dependent on the density of the monitoring stations network. The produced maps will be useful in regional and local studies related to climate change, desertification, land and water resources management, hydrological modelling, and flood mitigation planning.

  14. Numerical modeling for saturated-zone groundwater travel time analysis at Yucca Mountain

    International Nuclear Information System (INIS)

    Arnold, B.W.; Barr, G.E.

    1996-01-01

    A three-dimensional, site-scale numerical model of groundwater flow in the saturated zone at Yucca Mountain was constructed and linked to particle tracking simulations to produce an estimate of the distribution of groundwater travel times from the potential repository to the boundary of the accessible environment. This effort and associated modeling of groundwater travel times in the unsaturated zone were undertaken to aid in the evaluation of compliance of the site with 10 CFR 960. These regulations stipulate that pre-waste-emplacement groundwater travel time to the accessible environment shall exceed 1,000 years along any path of likely and significant radionuclide travel.

  15. Time-resolved energy spectrum of a pseudospark-produced high-brightness electron beam

    International Nuclear Information System (INIS)

    Myers, T.J.; Ding, B.N.; Rhee, M.J.

    1992-01-01

    The pseudospark, a fast low-pressure gas discharge between a hollow cathode and a planar anode, is found to be an interesting high-brightness electron beam source. Typically, an electron beam produced in the pseudospark has a peak current of ∼1 kA, a pulse duration of ∼50 ns, and an effective emittance of ∼100 mm-mrad. The energy distribution of this beam, however, is the least understood of its properties, owing to the difficulty of measuring a high-current-density beam that is partially space-charge neutralized by the background ions produced in the gas. In this paper, an experimental study of the time-resolved energy spectrum is presented. The pseudospark-produced electron beam is injected into a vacuum through a small pinhole so that the electrons, without background ions, follow single-particle motion; the beam is sent through a negatively biased electrode, and only the portion of the beam whose energy is greater than the bias voltage can pass through the electrode, its current being measured by a Faraday cup. The Faraday cup signals at various bias voltages are recorded on a digital oscilloscope. The recorded waveforms are then numerically analyzed to construct a time-resolved energy spectrum. Preliminary results are presented.
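
The analysis step at the end of this abstract — recovering an energy spectrum from Faraday-cup waveforms recorded at several retarding bias voltages — can be sketched as a finite-difference derivative: the current at bias V collects only electrons with energy above eV, so the spectrum is −dI/dV. The waveform model and all numbers below are synthetic stand-ins, not the experiment's data.

```python
import math

biases = [0, 5, 10, 15, 20, 25, 30]          # kV, hypothetical bias grid
times = [i * 5 for i in range(10)]           # ns, hypothetical sample instants

def current(v, t):
    """Synthetic waveform: Gaussian energy spread whose mean drifts in time."""
    mean = 20.0 - 0.2 * t                    # beam energy decays during the pulse
    # fraction of the Gaussian spectrum above v (complementary CDF)
    return 0.5 * math.erfc((v - mean) / (3.0 * math.sqrt(2)))

spectrum = {}
for t in times:
    I = [current(v, t) for v in biases]
    # finite-difference -dI/dV between adjacent bias settings
    spectrum[t] = [-(I[i + 1] - I[i]) / (biases[i + 1] - biases[i])
                   for i in range(len(biases) - 1)]
```

As the synthetic beam energy drops during the pulse, the peak of the reconstructed spectrum shifts to lower bias bins at later times, which is the kind of time-resolved structure the measurement is after.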

  16. Confirmation and calibration of computer modeling of tsunamis produced by Augustine volcano, Alaska

    Science.gov (United States)

    Beget, James E.; Kowalik, Zygmunt

    2006-01-01

    Numerical modeling has been used to calculate the characteristics of a tsunami generated by a landslide into Cook Inlet from Augustine Volcano. The modeling predicts travel times of ca. 50-75 minutes to the nearest populated areas, and indicates that significant wave amplification occurs near Mt. Iliamna on the western side of Cook Inlet, and near the Nanwelak and the Homer-Anchor Point areas on the east side of Cook Inlet. Augustine volcano last produced a tsunami during an eruption in 1883, and field evidence of the extent and height of the 1883 tsunamis can be used to test and constrain the results of the computer modeling. Tsunami deposits on Augustine Island indicate waves near the landslide source were more than 19 m high, while 1883 tsunami deposits in distal sites record waves 6-8 m high. Paleotsunami deposits were found at sites along the coast near Mt. Iliamna, Nanwelak, and Homer, consistent with numerical modeling indicating significant tsunami wave amplification occurs in these areas. 

  17. Evaluation of digital model accuracy and time-dependent deformation of alginate impressions.

    Science.gov (United States)

    Cesur, M G; Omurlu, I K; Ozer, T

    2017-09-01

    The aim of this study was to evaluate the accuracy of digital models produced with a three-dimensional dental scanner, and to test the dimensional stability of alginate impressions after storage times of zero days (immediately, T0), 1 day (T1), and 2 days (T2). A total of sixty impressions were taken from a master model with an alginate and were poured into plaster models after the three storage periods. Twenty impressions were directly scanned (negative digital models), after which plaster models were poured and scanned (positive digital models) immediately. The remaining 40 impressions were poured after 1 and 2 days. In total, 9 points and 11 linear measurements were used to analyze the plaster models and the negative and positive digital models. Time-dependent deformation of the alginate impressions and the accuracy of the conventional plaster models and digital models were evaluated separately. Plaster models and negative and positive digital models showed significant differences in nearly all measurements at T0, T1, and T2 times (P < 0.05), but they demonstrated statistically significant differences at T2 time (P < 0.05). [...] impressions is a practicable method for orthodontists.

  18. Comments on a time-dependent version of the linear-quadratic model

    International Nuclear Information System (INIS)

    Tucker, S.L.; Travis, E.L.

    1990-01-01

    The accuracy and interpretation of the 'LQ + time' model are discussed. Evidence is presented, based on data in the literature, that this model does not accurately describe the changes in isoeffect dose occurring with protraction of the overall treatment time during fractionated irradiation of the lung. This lack of fit of the model explains, in part, the surprisingly large values of γ/α that have been derived from experimental lung data. The large apparent time factors for lung suggested by the model are also partly explained by the fact that γT/α, despite having units of dose, actually measures the influence of treatment time on the effect scale, not the dose scale, and is shown to consistently overestimate the change in total dose. The unusually high values of α/β that have been derived for lung using the model are shown to be influenced by the method by which the model was fitted to data. Reanalyses of the data using a more statistically valid regression procedure produce estimates of α/β more typical of those usually cited for lung. Most importantly, published isoeffect data from lung indicate that the true deviation from the linear-quadratic (LQ) model is nonlinear in time, instead of linear, and also depends on other factors such as the effect level and the size of dose per fraction. Thus, the authors do not advocate the use of the 'LQ + time' expression as a general isoeffect model. (author). 32 refs.; 3 figs.; 1 tab

  19. 3D Modelling and Printing Technology to Produce Patient-Specific 3D Models.

    Science.gov (United States)

    Birbara, Nicolette S; Otton, James M; Pather, Nalini

    2017-11-10

    A comprehensive knowledge of mitral valve (MV) anatomy is crucial in the assessment of MV disease. While the use of three-dimensional (3D) modelling and printing in MV assessment has undergone early clinical evaluation, the precision and usefulness of this technology requires further investigation. This study aimed to assess and validate 3D modelling and printing technology to produce patient-specific 3D MV models. A prototype method for MV 3D modelling and printing was developed from computed tomography (CT) scans of a plastinated human heart. Mitral valve models were printed using four 3D printing methods and validated to assess precision. Cardiac CT and 3D echocardiography imaging data of four MV disease patients was used to produce patient-specific 3D printed models, and 40 cardiac health professionals (CHPs) were surveyed on the perceived value and potential uses of 3D models in a clinical setting. The prototype method demonstrated submillimetre precision for all four 3D printing methods used, and statistical analysis showed a significant difference (p < 0.05). [...] 3D printed models, particularly using multiple print materials, were considered useful by CHPs for preoperative planning, as well as other applications such as teaching and training. This study suggests that, with further advances in 3D modelling and printing technology, patient-specific 3D MV models could serve as a useful clinical tool. The findings also highlight the potential of this technology to be applied in a variety of medical areas within both clinical and educational settings. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  20. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
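
The leap idea is easy to state in code. In this toy sketch (names and numbers are ours, not the paper's), a unit-tick clock visits every instant up to the horizon, while a leaping Tick only visits the deadlines that the time requirements actually mention:

```python
def unit_tick_states(horizon):
    """One clock state per time unit, as in Lamport-style unit ticking."""
    return list(range(horizon + 1))

def leaping_tick_states(horizon, deadlines):
    """Only visit t = 0, each relevant deadline, and the horizon."""
    points = sorted({0, horizon, *[d for d in deadlines if d <= horizon]})
    return points

deadlines = [3, 40, 750]                 # hypothetical timing requirements
unit = unit_tick_states(1000)            # 1001 clock states
leap = leaping_tick_states(1000, deadlines)   # 5 clock states
```

The state-space reduction is exactly this gap: the unit-tick clock contributes a factor proportional to the time horizon, the leaping clock only a factor proportional to the number of distinct deadlines.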

  1. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
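
The window-adaptation idea, separated from the fuzzy machinery, can be sketched as follows. The moving-average forecaster and the training data are hypothetical simplifications of what ATVF actually does; the point is only that the analysis window size is chosen by prediction accuracy on a training phase rather than fixed in advance.

```python
def moving_average_forecast(series, window):
    """Naive stand-in forecaster: predict each point from the previous `window` values."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def best_window(series, candidates):
    """Pick the analysis window with the lowest training mean squared error."""
    def mse(window):
        preds = moving_average_forecast(series, window)
        actual = series[window:]
        return sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(preds)
    return min(candidates, key=mse)

train = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16, 15, 17]   # made-up series
w = best_window(train, candidates=[2, 3, 4])
```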

  2. Test of models for electron transport in laser produced plasmas

    International Nuclear Information System (INIS)

    Colombant, D.G.; Manheimer, W.M.; Busquet, M.

    2005-01-01

    This paper examines five different models of electron thermal transport in laser produced spherical implosions. These are classical, classical with a flux limit f, delocalization, beam deposition model, and Fokker-Planck solutions. In small targets, the results are strongly dependent on f for flux limit models, with small f's generating very steep temperature gradients. Delocalization models are characterized by large preheat in the center of the target. The beam deposition model agrees reasonably well with the Fokker-Planck simulation results. For large, high gain fusion targets, the delocalization model shows the gain substantially reduced by the preheat. However, flux limitation models show gain largely independent of f, with the beam deposition model also showing the same high gain

  3. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  4. A comparison of cosmological models using time delay lenses

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio, E-mail: jjwei@pmo.ac.cn, E-mail: xfwu@pmo.ac.cn, E-mail: fmelia@email.arizona.edu [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China)

    2014-06-20

    The use of time-delay gravitational lenses to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 12 lens systems, which have thus far been used solely for optimizing the parameters of ΛCDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between competing models. The currently available sample indicates a likelihood of ∼70%-80% that the R {sub h} = ct universe is the correct cosmology versus ∼20%-30% for the standard model. This possibly interesting result reinforces the need to greatly expand the sample of time-delay lenses, e.g., with the successful implementation of the Dark Energy Survey, the VST ATLAS survey, and the Large Synoptic Survey Telescope. In anticipation of a greatly expanded catalog of time-delay lenses identified with these surveys, we have produced synthetic samples to estimate how large they would have to be in order to rule out either model at a ∼99.7% confidence level. We find that if the real cosmology is ΛCDM, a sample of ∼150 time-delay lenses would be sufficient to rule out R {sub h} = ct at this level of accuracy, while ∼1000 time-delay lenses would be required to rule out ΛCDM if the real universe is instead R {sub h} = ct. This difference in required sample size reflects the greater number of free parameters available to fit the data with ΛCDM.

  5. A comparison of cosmological models using time delay lenses

    International Nuclear Information System (INIS)

    Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio

    2014-01-01

    The use of time-delay gravitational lenses to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 12 lens systems, which have thus far been used solely for optimizing the parameters of ΛCDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between competing models. The currently available sample indicates a likelihood of ∼70%-80% that the R h = ct universe is the correct cosmology versus ∼20%-30% for the standard model. This possibly interesting result reinforces the need to greatly expand the sample of time-delay lenses, e.g., with the successful implementation of the Dark Energy Survey, the VST ATLAS survey, and the Large Synoptic Survey Telescope. In anticipation of a greatly expanded catalog of time-delay lenses identified with these surveys, we have produced synthetic samples to estimate how large they would have to be in order to rule out either model at a ∼99.7% confidence level. We find that if the real cosmology is ΛCDM, a sample of ∼150 time-delay lenses would be sufficient to rule out R h = ct at this level of accuracy, while ∼1000 time-delay lenses would be required to rule out ΛCDM if the real universe is instead R h = ct. This difference in required sample size reflects the greater number of free parameters available to fit the data with ΛCDM.

  6. Effect of temperature and hydraulic retention time on hydrogen producing granules: Homoacetogenesis and morphological characteristics

    International Nuclear Information System (INIS)

    Abreu, A. A.; Danko, A. S.; Alves, M. M.

    2009-01-01

    The effect of temperature and hydraulic retention time (HRT) on homoacetogenesis and on the morphological characteristics of hydrogen-producing granules was investigated. Hydrogen was produced using an expanded granular sludge blanket (EGSB) reactor, fed with glucose and L-arabinose, under mesophilic (37 °C), thermophilic (55 °C), and hyperthermophilic (70 °C) conditions. (Author)

  7. Features of the use of time-frequency distributions for controlling the mixture-producing aggregate

    Science.gov (United States)

    Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.

    2018-05-01

    The paper presents and substantiates information on the filtering properties of the mixing unit as a part of the mixture-producing aggregate. Relevant theoretical data concerning the channel transfer function of the mixing unit and multidimensional material flow signals are adduced. Ordinary one-dimensional material flow signals are described in terms of Cohen's-class time-frequency distributions computed with Gabor wavelet functions. Two time-frequency signal representations are used in the paper to show how control problems can be solved for mixture-producing systems: the so-called Rihaczek and Wigner-Ville distributions. In particular, the latter illustrates the low-pass filtering properties available in practically any low-pass element of a physical system.
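
As an illustration of the representations named above, a bare-bones discrete Wigner-Ville distribution can be computed directly from its definition. A practical implementation would use the analytic signal and FFTs; this brute-force version, on a signal of our own choosing, is only a sketch:

```python
import cmath, math

def wigner_ville(x):
    """Discrete Wigner-Ville distribution W[t][k] of a complex signal x."""
    n = len(x)
    W = [[0.0] * n for _ in range(n)]
    for t in range(n):
        m = min(t, n - 1 - t)                # largest admissible lag at time t
        for k in range(n):                   # frequency bin
            acc = 0j
            for tau in range(-m, m + 1):
                acc += (x[t + tau] * x[t - tau].conjugate()
                        * cmath.exp(-2j * math.pi * k * tau / n))
            W[t][k] = acc.real
    return W

# A pure tone concentrates its energy in one frequency bin at every instant
sig = [cmath.exp(2j * math.pi * 8 * i / 64) for i in range(64)]
W = wigner_ville(sig)
```

Note the factor-of-two frequency scaling inherent in this lag convention: the tone at bin 8 shows up at bin 16 of the distribution.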

  8. Real time natural object modeling framework

    International Nuclear Information System (INIS)

    Rana, H.A.; Shamsuddin, S.M.; Sunar, M.H.

    2008-01-01

    CG (Computer Graphics) is a key technology for producing visual content. Computer-generated imagery techniques are currently being developed and applied, particularly in the fields of virtual reality applications, film production, and training and flight simulators, to provide total composition of realistic computer graphic images. Natural objects like clouds are an integral feature of the sky; without them, synthetic outdoor scenes seem unrealistic. Modeling and animating such objects is a difficult task. Most systems are difficult to use, as they require the adjustment of numerous complex parameters and are non-interactive. This paper presents an intuitive, interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. A high-level interface models clouds through the visual use of cubes. Clouds are rendered using the hardware-accelerated OpenGL API. The resulting interactive design and rendering system produces perceptually convincing cloud models that can be used in any interactive system. (author)

  9. Functional components produced by multi-jet modelling combined with electroforming and machining

    Directory of Open Access Journals (Sweden)

    Baier, Oliver

    2014-08-01

    In fuel cell technology, certain components are used that are responsible for guiding liquid media. When these components are produced by conventional manufacturing, there are often sealing issues, and trouble- and maintenance-free deployment cannot be ensured. Against this background, a new process combination has been developed in a joint project between the University of Duisburg-Essen, the Center for Fuel Cell Technology (ZBT), and the company Galvano-T electroplating forming GmbH. The approach is to combine multi-jet modelling (MJM), electroforming and milling in order to produce a defined external geometry. The wax models are generated on copper base plates and copper-coated to the desired thickness. Following this, the undefined electroplated surfaces are machined to achieve the desired dimensions, and the wax is melted out. This paper presents, first, how this process is technically feasible, then describes how the MJM on a 3-D Systems ThermoJet was adapted to stabilise the process. In the AiF-sponsored ZIM project, existing limits and possibilities are shown and different approaches to electroplating are investigated. This paper explores whether or not activation of the wax structure by a conductive initial layer is required. Using the described process chain, different parts were built: a heat exchanger, a vaporiser, and a reformer (in which pellets were integrated in an intermediate step). In addition, multiple-layer parts with different functions were built by repeating the process combination several times.

  10. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…
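
The hidden Markov machinery behind such an analysis reduces, at its core, to the forward algorithm: given latent states, transition probabilities and emission probabilities, compute the likelihood of an observed sequence. A toy sketch with invented "calming"/"escalating" states and made-up probabilities (not the paper's fitted model):

```python
states = ["calming", "escalating"]
start = {"calming": 0.5, "escalating": 0.5}
trans = {"calming": {"calming": 0.8, "escalating": 0.2},
         "escalating": {"calming": 0.4, "escalating": 0.6}}
emit = {"calming": {"soothed": 0.7, "crying": 0.3},
        "escalating": {"soothed": 0.2, "crying": 0.8}}

def forward(observations):
    """Forward algorithm: probability of the observation sequence under the model."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

p = forward(["crying", "crying", "soothed"])   # likelihood of a short dyadic episode
```

Fitting such a model to many dyads (e.g. by expectation-maximisation) yields the latent-state descriptions the abstract refers to; the forward pass above is the building block of that fit.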

  11. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  12. Basic Investigations of Dynamic Travel Time Estimation Model for Traffic Signals Control Using Information from Optical Beacons

    Science.gov (United States)

    Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke

    This paper puts forward neuron-type models, i.e., a neural network model, a wavelet neuron model and a three-layered wavelet neuron model (WV3), for estimating travel time between signalized intersections in order to facilitate adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that, compared to the other models, the WV3 model learns very fast and produces more accurate estimates of travel time. It is also shown that up-link information obtainable from optical beacons, i.e., the travel time observed during the previous cycle in this case, makes a crucial input variable to the models: there is no substantial difference between the change of estimated and simulated travel time with the change of green time or offset when up-link information is employed as input, while a big discrepancy appears between them when it is not.

  13. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  14. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)
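
The correction-signal idea is ordinary superposition: since the general solution contains both inward and outward propagating components, adding the negative of the inward component to the original time history cancels it and leaves only the outward wave. A schematic sketch with synthetic 1-D wave shapes (these stand in for the global-matrix solution, which we do not reproduce here):

```python
import math

c = 2.0                                       # wave speed, arbitrary units

def outward(t, x):
    """Outward-propagating component f(t - x/c)."""
    return math.sin(t - x / c)

def inward(t, x):
    """Spurious inward-propagating component g(t + x/c)."""
    return 0.3 * math.sin(t + x / c)

def raw_history(t, x):
    """What the uncorrected solution gives: both components superposed."""
    return outward(t, x) + inward(t, x)

def corrected_history(t, x):
    """Add the correction signal -g(t + x/c) to cancel the inward wave."""
    return raw_history(t, x) - inward(t, x)

samples = [(0.1 * i, 1.5) for i in range(100)]
err = max(abs(corrected_history(t, x) - outward(t, x)) for t, x in samples)
```

In the paper the hard part is constructing the correction signal for anisotropic laminates, where the inward component is not known in closed form; the superposition step itself is as simple as above.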

  15. Can producer currency pricing models generate volatile real exchange rates?

    OpenAIRE

    Povoledo, L.

    2012-01-01

    If the elasticities of substitution between traded and nontraded and between Home and Foreign traded goods are sufficiently low, then the real exchange rate generated by a model with full producer currency pricing is as volatile as in the data.

  16. Real-time process optimization based on grey-box neural models

    Directory of Open Access Journals (Sweden)

    F. A. Cubillos

    2007-09-01

    This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, and are used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms, such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimum solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
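
The hybrid strategy in this abstract — a coarse evolutionary search to locate the right basin of a multimodal objective, followed by a local descent standing in for the NLP step — can be sketched on a synthetic function. The objective, population sizes and step sizes below are illustrative, not the Williams-Otto setup:

```python
import random, math

random.seed(1)

def objective(x):
    """Synthetic multimodal objective: quadratic bowl plus oscillation."""
    return (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)

def genetic_search(generations=40, pop_size=30):
    """Coarse elitist GA: keep the best half, refill with mutated copies."""
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]
        pop = parents + [p + random.gauss(0, 0.5) for p in parents]
    return min(pop, key=objective)

def local_descent(x, step=0.01, iters=500):
    """Local refinement via gradient descent with a numerical derivative."""
    for _ in range(iters):
        grad = (objective(x + 1e-6) - objective(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

x_ga = genetic_search()        # basin located by the evolutionary phase
x_opt = local_descent(x_ga)    # polished by the local (NLP-like) phase
```

The division of labour mirrors the paper's finding: the GA alone is slow to converge precisely, the local solver alone gets trapped in a wrong basin, and the combination is fast and reliable.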

  17. Real-time modeling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    1994-08-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models

  18. Real-time modelling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    2000-01-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralised dispersion modelling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models. (author)

  19. Identification of neutral biochemical network models from time series data.

    Science.gov (United States)

    Vilela, Marco; Vinga, Susana; Maia, Marco A Grivet Mattoso; Voit, Eberhard O; Almeida, Jonas S

    2009-05-05

    The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.
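
The Monte Carlo search for admissible parameter sets can be caricatured for a one-variable S-system, dX/dt = a·X^g − b·X^h. This is an illustrative sketch, not the authors' code: the parameter ranges, the tolerance, and the "true" parameters that generate the synthetic data are all invented for the example.

```python
import numpy as np

def simulate_s_system(params, x0, t):
    """Euler-integrate the one-variable S-system dX/dt = a*X^g - b*X^h."""
    a, g, b, h = params
    x = np.empty_like(t)
    x[0] = x0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        xk = max(x[k - 1], 1e-9)            # keep the power laws defined
        x[k] = xk + dt * (a * xk**g - b * xk**h)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 101)
true_params = (2.0, 0.5, 1.0, 1.0)          # generator of the "experimental" data
data = simulate_s_system(true_params, 0.1, t)

# Monte Carlo sampling: keep every parameter set whose trajectory imitates
# the observed time series with sufficient accuracy
tol = 0.05
ensemble = []
for _ in range(3000):
    cand = (rng.uniform(0.5, 4.0), rng.uniform(0.0, 1.0),
            rng.uniform(0.5, 2.0), rng.uniform(0.5, 1.5))
    mse = float(np.mean((simulate_s_system(cand, 0.1, t) - data) ** 2))
    if mse < tol:
        ensemble.append((cand, mse))
```

The accepted `ensemble` is the toy analogue of the paper's model ensemble: many distinct parameter sets (and, in the multivariate case, topologies) that are dynamically indistinguishable at the chosen tolerance.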

  20. Modeling of Bacillus cereus distribution in pasteurized milk at the time of consumption

    Directory of Open Access Journals (Sweden)

    Ľubomír Valík

    2013-02-01

    Full Text Available Modelling of the Bacillus cereus distribution in pasteurized milk produced in Slovakia, at the time of consumption, was performed in this study. The Modular Process Risk Model (MPRM) methodology was applied over all the consecutive steps in the food chain. The main factors involved in the risk of being exposed to unacceptable levels of B. cereus (model output) were the initial density of B. cereus after milk pasteurization and the storage temperatures and times (model input). Monte Carlo simulations were used for probability calculation of B. cereus density. Sensitivity analysis was applied to determine the influence of the input factors and their threshold values on the final count of B. cereus. The results of the general case exposure assessment indicated that almost 14 % of Tetra Brik cartons can contain > 10^4 cfu/ml of B. cereus, given the storage temperature distribution taken into account and the time of pasteurized milk consumption. doi:10.5219/264
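
The probabilistic exposure calculation can be illustrated with a small Monte Carlo sketch. The input distributions and growth-model coefficients below are invented placeholders, not the values fitted in the study; only the structure (initial density plus growth over storage time and temperature, then the fraction of cartons above 10^4 cfu/ml) mirrors the described approach.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                   # simulated milk cartons

# Model inputs (illustrative distributions, not the paper's fitted ones)
log_n0 = rng.normal(-0.5, 0.5, n)             # initial density, log10 cfu/ml
temp = rng.triangular(4.0, 6.0, 10.0, n)      # storage temperature, deg C
time_d = rng.uniform(1.0, 10.0, n)            # storage time, days

# Ratkowsky-type secondary model for the specific growth rate
b, t_min = 0.023, 1.0                         # illustrative coefficients
mu = (b * np.maximum(temp - t_min, 0.0)) ** 2 * 24.0   # per day, natural-log units

# Primary model: exponential growth over the storage period
log_n = log_n0 + mu * time_d / np.log(10.0)
p_exceed = float(np.mean(log_n > 4.0))        # fraction above 10^4 cfu/ml
```

`p_exceed` is the model output of interest: the probability that a carton exceeds the acceptability limit at the time of consumption.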

  1. Exploring oil market dynamics: a system dynamics model and microworld of the oil producers

    Energy Technology Data Exchange (ETDEWEB)

    Morecroft, J.D.W. [London Business School (United Kingdom); Marsh, B. [St Andrews Management Institute, Fife (United Kingdom)

    1997-11-01

    This chapter focuses on the development of a simulation model of global oil markets by Royal Dutch/Shell Planners in order to explore the implications of different scenarios. The model development process, mapping the decision making logic of the oil producers, the swing producer making enough to defend the intended price, the independents, quota setting, the opportunists, and market oil price and demand are examined. Use of the model to generate scenarios development of the model as a gaming simulator for training, design of the user interface, and the value of the model are considered in detail. (UK)

  2. Modeling of phosphorus fluxes produced by wild fires at watershed scales.

    Science.gov (United States)

    Matyjasik, M.; Hernandez, M.; Shaw, N.; Baker, M.; Fowles, M. T.; Cisney, T. A.; Jex, A. P.; Moisen, G.

    2017-12-01

    River runoff is one of the controlling processes in the terrestrial phosphorus cycle. Phosphorus is often a limiting factor in fresh water. One of the factors that has not been studied and modeled in detail is the phosphorus flux produced by forest wild fires. Phosphate released by weathering is quickly absorbed in soils. Forest wild fires expose barren soils to intensive erosion, thus releasing relatively large fluxes of phosphorus. Measurements from three control burn sites were used to correlate erosion with phosphorus fluxes. These results were used to model phosphorus fluxes from burned watersheds during a five-year period after fires occurred. Erosion in our model is simulated using a combination of two models: the WEPP (USDA Water Erosion Prediction Project) and the GeoWEPP (GIS-based Water Erosion Prediction Project). Erosion produced from forest disturbances is predicted for any watershed using hydrologic, soil, and meteorological data unique to the individual watersheds or individual slopes. The erosion results are modified for different textural soil classes and slope angles to model fluxes of phosphorus. The results of these models are calibrated using measured concentrations of phosphorus for three watersheds located in the Interior Western United States. The results will help the United States Forest Service manage phosphorus fluxes in national forests.

  3. Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model

    Science.gov (United States)

    Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen

    2017-03-01

    Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling these products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, including 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to the rain gauge precipitation, and a post-processing method is proposed to post-process the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves. This suggests that the model parameters should be optimized with the QPF, not the rain gauge precipitation. As lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning due to their long lead time and rational results.
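
The abstract does not specify the post-processing scheme, but a minimal multiplicative bias correction of the kind commonly applied to QPF products might look like the sketch below. All function names and numbers are toy illustrations, not the paper's method or data.

```python
import numpy as np

def bias_correct(qpf, gauge_obs, qpf_train):
    """Scale raw QPF by the gauge/forecast ratio observed over a training
    period. A minimal multiplicative post-processor (illustrative only)."""
    factor = gauge_obs.sum() / max(qpf_train.sum(), 1e-9)
    return qpf * factor

# Toy data: the raw QPF systematically under-forecasts by ~30 %
gauge_train = np.array([12.0, 0.0, 5.5, 20.1, 3.2])    # mm, gauge observed
qpf_train = gauge_train * 0.7                          # mm, raw forecast
new_qpf = np.array([8.4, 14.0, 0.7])                   # next event, raw forecast
corrected = bias_correct(new_qpf, gauge_train, qpf_train)
```

Here the training ratio is 1/0.7, so the corrected event forecast recovers the gauge-scale amounts.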

  4. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  5. Patient specific dynamic geometric models from sequential volumetric time series image data.

    Science.gov (United States)

    Cameron, B M; Robb, R A

    2004-01-01

    Generating patient specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the metasurface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface and allows the computation of the time point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.

  6. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  7. Modeling real-time balancing power demands in wind power systems using stochastic differential equations

    International Nuclear Information System (INIS)

    Olsson, Magnus; Perninge, Magnus; Soeder, Lennart

    2010-01-01

    The inclusion of wind power into power systems has a significant impact on the demand for real-time balancing power due to the stochastic nature of wind power production. The overall aim of this paper is to present probabilistic models of the impact of large-scale integration of wind power on the continuous demand in MW for real-time balancing power. This is important not only for system operators, but also for producers and consumers, since in most systems they provide balancing power through various market solutions. Since situations can occur where the wind power variations cancel out other types of deviations in the system, models on an hourly basis are not sufficient. Therefore the developed model is in continuous time and is based on stochastic differential equations (SDE). The model can be used within an analytical framework or in Monte Carlo simulations. (author)
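
A continuous-time stochastic model of this kind can be sketched with a mean-reverting (Ornstein-Uhlenbeck) SDE integrated by the Euler-Maruyama scheme. The parameter values below are arbitrary illustrations, not those of the paper's model.

```python
import numpy as np

def simulate_balancing_demand(x0, theta, mu, sigma, dt, steps, rng):
    """Euler-Maruyama simulation of the mean-reverting SDE
    dX = theta*(mu - X) dt + sigma dW,
    used here as a generic stand-in for a continuous-time balancing-demand
    model suitable for Monte Carlo studies."""
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(1)
# One sample path: reversion rate 0.8/h toward 50 MW, volatility 5 MW/sqrt(h)
path = simulate_balancing_demand(0.0, 0.8, 50.0, 5.0, 0.01, 10_000, rng)
```

Repeating the simulation many times gives the Monte Carlo distribution of the continuous balancing-power demand mentioned in the abstract.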

  8. Inoculum effect on the efficacies of amoxicillin-clavulanate, piperacillin-tazobactam, and imipenem against extended-spectrum β-lactamase (ESBL)-producing and non-ESBL-producing Escherichia coli in an experimental murine sepsis model.

    Science.gov (United States)

    Docobo-Pérez, F; López-Cerero, L; López-Rojas, R; Egea, P; Domínguez-Herrera, J; Rodríguez-Baño, J; Pascual, A; Pachón, J

    2013-05-01

    Escherichia coli is commonly involved in infections with a heavy bacterial burden. Piperacillin-tazobactam and carbapenems are among the recommended empirical treatments for health care-associated complicated intra-abdominal infections. In contrast to amoxicillin-clavulanate, both have reduced in vitro activity in the presence of high concentrations of extended-spectrum β-lactamase (ESBL)-producing and non-ESBL-producing E. coli bacteria. Our goal was to compare the efficacy of these antimicrobials against different concentrations of two clinical E. coli strains, one an ESBL producer and the other a non-ESBL producer, in a murine sepsis model. An experimental sepsis model (~5.5 log10 CFU/g [low inoculum concentration (LI)] or ~7.5 log10 CFU/g [high inoculum concentration (HI)]) using E. coli strains ATCC 25922 (non-ESBL producer) and Ec1062 (CTX-M-14 producer), which are susceptible to the three antimicrobials, was used. Amoxicillin-clavulanate (50/12.5 mg/kg given intramuscularly [i.m.]), piperacillin-tazobactam (25/3.125 mg/kg given intraperitoneally [i.p.]), and imipenem (30 mg/kg i.m.) were used. Piperacillin-tazobactam and imipenem reduced spleen ATCC 25922 strain concentrations (-2.53 and -2.14 log10 CFU/g [P imipenem, and amoxicillin-clavulanate, respectively, although imipenem and amoxicillin-clavulanate were more efficacious than piperacillin-tazobactam). An adapted imipenem treatment (based on the time for which the serum drug concentration remained above the MIC obtained with a HI of the ATCC 25922 strain) improved its efficacy to -1.67 log10 CFU/g (P imipenem treatment of infections caused by ESBL- and non-ESBL-producing E. coli strains in patients with therapeutic failure with piperacillin-tazobactam.

  9. Simultaneously time- and space-resolved spectroscopic characterization of laser-produced plasmas

    International Nuclear Information System (INIS)

    Charatis, G.; Young, B.K.F.; Busch, G.E.

    1988-01-01

    The CHROMA laser facility at KMS Fusion has been used to irradiate a variety of microdot targets. These include aluminum dots and mixed bromine dots doped with K-shell (magnesium) emitters. Simultaneously time- and space-resolved K-shell and L-shell spectra have been measured and compared to dynamic model predictions. The electron density profiles are measured using holographic interferometry. Temperatures, densities, and ionization distributions are determined using K-shell and L-shell spectral techniques. Time and spatial gradients are resolved simultaneously using three diagnostics: a framing crystal x-ray spectrometer, an x-ray streaked crystal spectrometer with a spatial imaging slit, and a 4-frame holographic interferometer. Significant differences have been found between the interferometric and the model-dependent spectral measurements of plasma density. Predictions by new non-stationary L-shell models currently being developed are also presented. 14 refs., 10 figs

  10. Identification of neutral biochemical network models from time series data

    Directory of Open Access Journals (Sweden)

    Maia Marco

    2009-05-01

    Full Text Available Abstract Background The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. Results In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. Conclusion The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.

  11. Non-Invasive Rapid Harvest Time Determination of Oil-Producing Microalgae Cultivations for Biodiesel Production by Using Chlorophyll Fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Yaqin [Key Laboratory of Algal Biology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan (China); University of Chinese Academy of Sciences, Beijing (China); Rong, Junfeng [SINOPEC Research Institute of Petroleum Processing, Beijing (China); Chen, Hui; He, Chenliu; Wang, Qiang, E-mail: wangqiang@ihb.ac.cn [Key Laboratory of Algal Biology, Institute of Hydrobiology, Chinese Academy of Sciences, Wuhan (China)

    2015-10-05

    For the large-scale cultivation of microalgae for biodiesel production, one of the key problems is the determination of the optimum time for algal harvest when algae cells are saturated with neutral lipids. In this study, a method to determine the optimum harvest time in oil-producing microalgal cultivations by measuring the maximum photochemical efficiency of photosystem II, also called Fv/Fm, was established. When oil-producing Chlorella strains were cultivated and then treated with nitrogen starvation, it not only stimulated neutral lipid accumulation, but also affected the photosynthesis system, with the neutral lipid contents in all four algae strains – Chlorella sorokiniana C1, Chlorella sp. C2, C. sorokiniana C3, and C. sorokiniana C7 – correlating negatively with the Fv/Fm values. Thus, for the given oil-producing algae, in which a significant relationship between the neutral lipid content and Fv/Fm value under nutrient stress can be established, the optimum harvest time can be determined by measuring the value of Fv/Fm. It is hoped that this method can provide an efficient way to determine the harvest time rapidly and expediently in large-scale oil-producing microalgae cultivations for biodiesel production.
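
The harvest-time rule reduces to a calibration curve: once a negative linear relationship between neutral lipid content and Fv/Fm has been established for a given strain, the Fv/Fm reading corresponding to a target lipid content marks the harvest point. The calibration numbers below are hypothetical, not measurements from the study.

```python
import numpy as np

# Hypothetical calibration data for one strain: Fv/Fm falls as neutral
# lipid content rises under nitrogen starvation (negative correlation)
fv_fm = np.array([0.65, 0.55, 0.45, 0.35, 0.25])
lipid = np.array([8.0, 15.0, 23.0, 30.0, 38.0])   # neutral lipid, % dry weight

slope, intercept = np.polyfit(fv_fm, lipid, 1)    # fitted calibration line

def fv_fm_at_target(target_lipid):
    """Fv/Fm reading at which the culture reaches the target lipid content,
    inverted from the fitted linear calibration."""
    return (target_lipid - intercept) / slope

threshold = fv_fm_at_target(35.0)   # harvest when Fv/Fm drops to this value
```

In practice one would monitor Fv/Fm non-invasively during cultivation and trigger harvest once the reading falls below `threshold`.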

  12. Comparison of dimensional accuracy of digital dental models produced from scanned impressions and scanned stone casts

    Science.gov (United States)

    Subeihi, Haitham

    Introduction: Digital models of dental arches play a more and more important role in dentistry. A digital dental model can be generated by directly scanning intraoral structures, by scanning a conventional impression of oral structures or by scanning a stone cast poured from the conventional impression. An accurate digital scan model is a fundamental part for the fabrication of dental restorations. Aims: 1. To compare the dimensional accuracy of digital dental models produced by scanning of impressions versus scanning of stone casts. 2. To compare the dimensional accuracy of digital dental models produced by scanning of impressions made of three different materials (polyvinyl siloxane, polyether or vinyl polyether silicone). Methods and Materials: This laboratory study included taking addition silicone, polyether and vinyl polyether silicone impressions from an epoxy reference model that was created from an original typodont. Teeth number 28 and 30 on the typodont with a missing tooth number 29 were prepared for a metal-ceramic three-unit fixed dental prosthesis with tooth #29 being a pontic. After tooth preparation, an epoxy resin reference model was fabricated by duplicating the typodont quadrant that included the tooth preparations. From this reference model 12 polyvinyl siloxane impressions, 12 polyether impressions and 12 vinyl polyether silicone impressions were made. All 36 impressions were scanned before pouring them with dental stone. The 36 dental stone casts were, in turn, scanned to produce digital models. A reference digital model was made by scanning the reference model. Six groups of digital models were produced. Three groups were made by scanning of the impressions obtained with the three different materials, the other three groups involved the scanning of the dental casts that resulted from pouring the impressions made with the three different materials. Groups of digital models were compared using Root Mean Squares (RMS) in terms of their

  13. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material.

  14. Integrated use of NMR, Petrel and MODFLOW in the modeling of SAGD produced water re-injection

    International Nuclear Information System (INIS)

    Campbell, K.; Phair, C.; Alloisio, S.; Novotny, M.; Raven, S.

    2011-01-01

    In the oil industry, steam-assisted gravity drainage (SAGD) is a method used to enhance oil recovery in which production water disposal is a challenge. During this process, production water is re-injected into the reservoir and operators have to verify that it will not affect the quality of the surrounding fresh groundwater. This research aimed at determining the flow path and the time that produced water would take to reach an adjacent aquifer. The study was carried out on a horizontal well pair at the Axe Lake Area in northwestern Saskatchewan, using existing site data in Petrel to create a static hydrogeological model which was then exported to Modflow to simulate injection scenarios. This innovative method provided the flow path of the re-injected water and the time to reach the fresh groundwater, with advantages over conventional hydrogeological modeling. The innovative workflow presented herein successfully provided useful information to assess the feasibility of the SAGD project and could be used for other projects.

  15. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
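
A minimal earliest-arrival query on a time-dependent network can be sketched as a Dijkstra variant whose labels are arrival times, with each edge relaxed at time t using the first connection departing at or after t (a FIFO assumption). The timetable below is a toy example, not data from the paper.

```python
import heapq

def earliest_arrival(timetable, source, target, start):
    """Dijkstra on a time-dependent network: a stop's label is its earliest
    known arrival time; relaxing an edge at time t uses the first connection
    departing at or after t (connections sorted by departure)."""
    best = {source: start}
    heap = [(start, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                                # stale queue entry
        for v, connections in timetable.get(u, {}).items():
            for dep, arr in connections:            # sorted by departure time
                if dep >= t:
                    if arr < best.get(v, float("inf")):
                        best[v] = arr
                        heapq.heappush(heap, (arr, v))
                    break                           # later connections can't help (FIFO)
    return best.get(target, float("inf"))

# Toy timetable: stop -> {next stop: [(departure, arrival), ...]}
timetable = {
    "A": {"B": [(5, 10), (15, 20)], "C": [(6, 30)]},
    "B": {"C": [(12, 18), (25, 28)]},
}
```

For a query from "A" to "C" starting at time 0, riding A→B (dep 5, arr 10) and B→C (dep 12, arr 18) beats the direct A→C connection arriving at 30.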

  16. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
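
The two-component construction (a flexible marginal plus Gaussian internal dynamics) can be caricatured in a few lines: map the series to normal scores through its cdf, fit an AR(1) in Gaussian space, and simulate back through the inverse cdf. This sketch substitutes an empirical cdf where the paper uses a nonparametric Bayesian prior, and all data are synthetic.

```python
import numpy as np
from statistics import NormalDist

def fit_copula_ar1(series):
    """Gaussian-copula AR(1): transform the series to normal scores via its
    empirical cdf, then estimate the internal dynamics in Gaussian space."""
    nd = NormalDist()
    n = len(series)
    ranks = np.argsort(np.argsort(series)) + 1          # ranks 1..n
    u = ranks / (n + 1.0)                               # empirical cdf values
    z = np.array([nd.inv_cdf(p) for p in u])            # normal scores
    phi = float(np.corrcoef(z[:-1], z[1:])[0, 1])       # AR(1) coefficient
    return phi, np.sort(series)                         # dynamics + marginal

def simulate(phi, sorted_vals, steps, rng):
    """Run the AR(1) in Gaussian space and back-transform each value with
    the empirical quantile function (cdf-inverse-cdf construction)."""
    nd = NormalDist()
    n = len(sorted_vals)
    z, out = 0.0, np.empty(steps)
    for k in range(steps):
        z = phi * z + np.sqrt(1.0 - phi**2) * rng.normal()
        q = min(int(nd.cdf(z) * n), n - 1)              # empirical quantile index
        out[k] = sorted_vals[q]
    return out

rng = np.random.default_rng(3)
# Toy non-Gaussian series: exponential of smoothed noise (skewed, persistent)
raw = np.exp(np.convolve(rng.normal(size=500), [0.6, 0.3, 0.1], "same"))
phi, quantiles = fit_copula_ar1(raw)
sim = simulate(phi, quantiles, 200, rng)
```

The simulated series inherits the non-Gaussian marginal from `quantiles` while its serial dependence comes entirely from the Gaussian AR(1).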

  17. Simulation model for transcervical laryngeal injection providing real-time feedback.

    Science.gov (United States)

    Ainsworth, Tiffiny A; Kobler, James B; Loan, Gregory J; Burns, James A

    2014-12-01

    This study aimed to develop and evaluate a model for teaching transcervical laryngeal injections. A 3-dimensional printer was used to create a laryngotracheal framework based on de-identified computed tomography images of a human larynx. The arytenoid cartilages and intrinsic laryngeal musculature were created in silicone from clay casts and thermoplastic molds. The thyroarytenoid (TA) muscle was created with electrically conductive silicone using metallic filaments embedded in silicone. Wires connected TA muscles to an electrical circuit incorporating a cell phone and speaker. A needle electrode completed the circuit when inserted in the TA during simulated injection, providing real-time feedback of successful needle placement by producing an audible sound. Face validation by the senior author confirmed appropriate tactile feedback and anatomical realism. Otolaryngologists pilot tested the model and completed presimulation and postsimulation questionnaires. The high-fidelity simulation model provided tactile and audio feedback during needle placement, simulating transcervical vocal fold injections. Otolaryngology residents demonstrated higher comfort levels with transcervical thyroarytenoid injection on postsimulation questionnaires. This is the first study to describe a simulator for developing transcervical vocal fold injection skills. The model provides real-time tactile and auditory feedback that aids in skill acquisition. Otolaryngologists reported increased confidence with transcervical injection after using the simulator. © The Author(s) 2014.

  18. Hopf Bifurcation in a Cobweb Model with Discrete Time Delays

    Directory of Open Access Journals (Sweden)

    Luca Gori

    2014-01-01

    Full Text Available We develop a cobweb model with discrete time delays that characterise the length of the production cycle. We assume a market comprised of homogeneous producers that operate as adapters by taking the (expected) profit-maximising quantity as a target to adjust production, and consumers with a marginal willingness to pay captured by an isoelastic demand. The dynamics of the economy is characterised by a one-dimensional delay differential equation. In this context, we show that (1) if the elasticity of market demand is sufficiently high, the steady-state equilibrium is locally asymptotically stable, and (2) if the elasticity of market demand is sufficiently low, quasiperiodic oscillations emerge when the time lag (that represents the length of the production cycle) is high enough.
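
A one-dimensional delay differential equation of this kind can be explored numerically with a fixed-step Euler scheme and a constant pre-history. The demand and supply forms and all parameter values below are illustrative stand-ins for the paper's exact specification.

```python
import numpy as np

def simulate_cobweb_dde(alpha, eps, tau, dt, t_end, p0):
    """Euler scheme for a delay differential cobweb model
        p'(t) = alpha * (demand(p(t)) - supply(p(t - tau))),
    with isoelastic demand p^(-eps), linear supply, and constant history
    p(t) = p0 for t <= 0 (an illustrative specification)."""
    lag = int(round(tau / dt))
    steps = int(round(t_end / dt))
    p = np.full(steps + 1, p0)
    for k in range(steps):
        delayed = p[max(k - lag, 0)]          # constant history before t = 0
        p[k + 1] = p[k] + dt * alpha * (p[k] ** (-eps) - delayed)
        p[k + 1] = max(p[k + 1], 1e-6)        # keep the price positive
    return p

# Sufficiently elastic demand: the price settles at the steady state p* = 1
path = simulate_cobweb_dde(alpha=1.0, eps=0.5, tau=0.5, dt=0.01,
                           t_end=60.0, p0=0.5)
```

Raising `tau` (a longer production cycle) or lowering the demand elasticity in this sketch eventually destabilises the steady state, in line with the paper's Hopf-bifurcation result.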

  19. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    International Nuclear Information System (INIS)

    Kao, Jim; Flicker, Dawn; Ide, Kayo; Ghil, Michael

    2006-01-01

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. Model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model, and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
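
The augmented-state device is generic and can be shown on a scalar toy problem: estimate the unknown coefficient a of x[k+1] = a·x[k] + w from noisy observations by appending a to the state vector and giving it only a (slow) random-walk evolution. All noise levels and values are invented for the illustration; this is not the paper's hydrocode setup.

```python
import numpy as np

rng = np.random.default_rng(7)
a_true, q_proc, r_meas = 0.9, 0.05, 0.02

# Generate "measurements" from the scalar linear model x[k+1] = a*x[k] + w
xs, x = [], 1.0
for _ in range(500):
    x = a_true * x + rng.normal(0.0, q_proc)
    xs.append(x + rng.normal(0.0, r_meas))

# EKF on the augmented state z = [x, a]; the parameter a has no deterministic
# forcing, only a small random-walk noise term
z = np.array([0.0, 0.5])                      # deliberately wrong initial a
P = np.diag([1.0, 1.0])
Q = np.diag([q_proc**2, 1e-6])
R = r_meas**2
H = np.array([[1.0, 0.0]])                    # only the state x is observed
for y in xs:
    F = np.array([[z[1], z[0]], [0.0, 1.0]])  # Jacobian of the augmented model
    z = np.array([z[1] * z[0], z[1]])         # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                       # innovation variance
    K = (P @ H.T) / S                         # Kalman gain (2x1)
    z = z + (K * (y - z[0])).ravel()          # update
    P = (np.eye(2) - K @ H) @ P

a_est = z[1]
```

A single assimilation loop thus refines the state and the parameter together, which is the mechanism the paper exploits for the shock-wave model.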

  20. Predicting long-term catchment nutrient export: the use of nonlinear time series models

    Science.gov (United States)

    Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda

    2010-05-01

    After the Second World War the nitrate concentrations in European water bodies changed significantly as the result of increased nitrogen fertilizer use and changes in land use. However, in the last decades, as a consequence of the implementation of nitrate-reducing measures in Europe, the nitrate concentrations in water bodies slowly decrease. This causes that the mean and variance of the observed time series also changes with time (nonstationarity and heteroscedascity). In order to detect changes and properly describe the behaviour of such time series by time series analysis, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average models (ARMA)), are no more suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series with the aim of detecting and subsequently explaining the variations in their statistical characteristics requires the use of nonlinear time series models. This information can be then used to improve the model building and calibration of conceptual water quality model or to select right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes represented by self-exciting threshold autoregressive (SETAR) and Markov-switching models (MSW). 
The analysis showed that, based on the value of residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time-series better than models of the
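As a concrete illustration of the regime-switching idea, a minimal two-regime SETAR(1) model can be fitted by ordinary least squares within each regime, with the threshold chosen by grid search over the residual sum of squares (RSS). This is an illustrative sketch on an invented synthetic series, not the authors' code; their study also used Markov-switching models.

```python
import random

def fit_ar1(pairs):
    """Least-squares fit of x_t = a + b*x_{t-1} on (x_{t-1}, x_t) pairs; returns (a, b, rss)."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    b = sxy / sxx if sxx > 0 else 0.0
    a = my - b * mx
    rss = sum((p[1] - a - b * p[0]) ** 2 for p in pairs)
    return a, b, rss

def fit_setar(series, candidates):
    """Two-regime SETAR(1): split pairs by whether x_{t-1} <= r; grid-search r by total RSS."""
    pairs = list(zip(series[:-1], series[1:]))
    best = None
    for r in candidates:
        low = [p for p in pairs if p[0] <= r]
        high = [p for p in pairs if p[0] > r]
        if len(low) < 5 or len(high) < 5:    # require enough data in each regime
            continue
        rss = fit_ar1(low)[2] + fit_ar1(high)[2]
        if best is None or rss < best[1]:
            best = (r, rss)
    return best  # (threshold, rss)

# Invented synthetic series with different AR(1) dynamics below/above the threshold 0.0.
random.seed(1)
x, series = 0.0, []
for _ in range(400):
    if x <= 0.0:
        x = 0.9 * x + 0.5 + random.gauss(0, 0.1)
    else:
        x = 0.3 * x - 0.2 + random.gauss(0, 0.1)
    series.append(x)

pairs = list(zip(series[:-1], series[1:]))
linear_rss = fit_ar1(pairs)[2]
threshold, setar_rss = fit_setar(series, [i / 10 for i in range(-10, 11)])
```

On a series that genuinely switches dynamics, the SETAR fit attains a lower RSS than a single linear AR(1) model, which is exactly the comparison criterion described in the abstract.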

  1. Modeling multivariate time series on manifolds with skew radial basis functions.

    Science.gov (United States)

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We demonstrate the new methodologies on several problems, including modeling data on manifolds and the prediction of chaotic time series.
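The iterative add-one-function idea can be sketched with plain Gaussian radial basis functions. The paper's actual algorithm additionally uses skew RBFs and a statistical hypothesis test to drive placement; here the placement rule (largest residual), target function, basis width and basis count are all invented for illustration.

```python
import math

def gauss(x, c, s):
    """Gaussian radial basis function centred at c with width s."""
    return math.exp(-((x - c) / s) ** 2)

def greedy_rbf_fit(xs, ys, n_basis, scale):
    """Iteratively add one Gaussian basis function at the point of largest residual,
    fitting only its amplitude to the current residual (scalar least squares)."""
    model = []                       # list of (center, amplitude)
    resid = list(ys)
    for _ in range(n_basis):
        i = max(range(len(xs)), key=lambda k: abs(resid[k]))  # worst-fit point
        c = xs[i]
        phi = [gauss(x, c, scale) for x in xs]
        amp = sum(p * r for p, r in zip(phi, resid)) / sum(p * p for p in phi)
        model.append((c, amp))
        resid = [r - amp * p for r, p in zip(resid, phi)]
    return model, resid

def predict(model, x, scale):
    return sum(a * gauss(x, c, scale) for c, a in model)

# Scattered samples of a smooth (invented) target function on [0, 2].
xs = [i / 20 for i in range(41)]
ys = [math.sin(3 * x) + 0.5 * x for x in xs]

model, resid = greedy_rbf_fit(xs, ys, n_basis=12, scale=0.3)
rms = math.sqrt(sum(r * r for r in resid) / len(resid))
```

Because each new amplitude is the scalar least-squares fit to the current residual, the residual sum of squares decreases monotonically as functions are added, mirroring the refinement loop described in the abstract.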

  2. A deformable surface model for real-time water drop animation.

    Science.gov (United States)

    Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun

    2012-08-01

A water drop behaves differently from a large water body because of its strong viscosity and surface tension at small scales. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at a real-time rate, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders of magnitude faster than existing simulation approaches that generate comparable water drop effects.
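A rough 2-D analogue of the surface-tension step can illustrate why curvature flow rounds a drop boundary. The paper uses an implicit mean curvature flow operator on a 3-D mesh; this explicit curve-smoothing sketch, with invented parameters, only shows the qualitative effect (a discrete Laplacian step shortens the boundary).

```python
import math

def perimeter(pts):
    """Total length of a closed polyline."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def smooth_step(pts, lam):
    """One explicit step of curvature-style smoothing: move each vertex of a closed
    polyline toward the midpoint of its neighbours (a discrete Laplacian)."""
    n = len(pts)
    out = []
    for i in range(n):
        (px, py), (qx, qy) = pts[i - 1], pts[(i + 1) % n]
        x, y = pts[i]
        out.append((x + lam * ((px + qx) / 2 - x), y + lam * ((py + qy) / 2 - y)))
    return out

# A star-shaped closed curve; smoothing mimics how surface tension pulls a drop boundary round.
pts = [((2 + math.cos(5 * t)) * math.cos(t), (2 + math.cos(5 * t)) * math.sin(t))
       for t in (2 * math.pi * k / 100 for k in range(100))]
p0 = perimeter(pts)
for _ in range(50):
    pts = smooth_step(pts, lam=0.5)
p1 = perimeter(pts)
```

Each step strictly shortens the boundary, which is the discrete counterpart of surface tension minimizing surface area; an implicit formulation, as in the paper, allows much larger stable time steps than this explicit one.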

  3. Prior and posterior probabilistic models of uncertainties in a model for producing voice

    International Nuclear Information System (INIS)

    Cataldo, Edson; Sampaio, Rubens; Soize, Christian

    2010-01-01

The aim of this paper is to use Bayesian statistics to update a probability density function related to the tension parameter, which is one of the main parameters responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. We follow a parametric approach to stochastic modeling, which requires the adoption of random variables to represent the uncertain parameters present in the cited model. For each random variable, a probability density function is constructed using the Maximum Entropy Principle, and the Monte Carlo method is used to generate voice signals as the output of the model. Then, a probability density function of the voice fundamental frequency is constructed. The random variables are fit to experimental data so that the probability density function of the fundamental frequency obtained from the model is as close as possible to the probability density function obtained from experimental data. New values of the fundamental frequency are then obtained experimentally and used to update the probability density function of the tension parameter via Bayes' theorem.
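The Bayesian update itself can be illustrated on a grid. Everything numerical here is invented for illustration, in particular the assumed linear map from the tension parameter q to the fundamental frequency and the noise level; the paper's actual forward model is a mechanical/mathematical voice production model.

```python
import math

def gaussian(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical forward model: fundamental frequency rises linearly with tension q.
def f0_of(q):
    return 80.0 + 60.0 * q          # Hz; invented mapping

qs = [i / 200 for i in range(401)]           # grid over q in [0, 2]
prior = [gaussian(q, 1.0, 0.4) for q in qs]  # broad prior on the tension parameter

observed_f0 = [150.0, 148.0, 152.0]          # new measurements (Hz), invented
noise_sd = 4.0

# Bayes' theorem on the grid: posterior ~ prior x likelihood, then renormalise.
post = list(prior)
for f in observed_f0:
    post = [p * gaussian(f, f0_of(q), noise_sd) for p, q in zip(post, qs)]
z = sum(post) * (qs[1] - qs[0])
post = [p / z for p in post]

q_map = qs[max(range(len(qs)), key=lambda i: post[i])]   # posterior mode
```

The posterior concentrates near the tension value whose predicted fundamental frequency matches the new observations, pulled slightly toward the prior, which is precisely the update the abstract describes.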

  4. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.

  5. Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models

    NARCIS (Netherlands)

    ter Hofstede, F.; Wedel, M.

    In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were

  6. Alighting and Boarding Time Model of Passengers at a LRT Station in Kuala Lumpur

    Directory of Open Access Journals (Sweden)

    Hor Peay San

    2017-01-01

Full Text Available A study was conducted to identify the factors affecting the alighting and boarding rates of passengers and to establish a prediction model for the alighting and boarding time of passengers for a passenger rail service in Malaysia. Data were collected at the KL Sentral LRT station during the morning and evening peak hours over a period of 5 working days. Results show that passenger behaviour, passenger volume, crowdedness in the train and mixture of flow have significant effects on the alighting and boarding time, although mixture of flow is not significant in the prediction model produced, owing to passenger behaviour at the platform.

  7. An economic production model for time dependent demand with rework and multiple production setups

    Directory of Open Access Journals (Sweden)

    S.R. Singh

    2014-04-01

Full Text Available In this paper, we present a model for time-dependent demand with multiple production setups and a rework setup. Production is demand-dependent and greater than the demand rate. The production facility produces items in m production setups and one rework setup (an (m, 1) policy). Rework is a major motivation for reverse logistics and green supply chains, as it reduces production costs and other ecological burdens. Most researchers have developed rework models without deteriorating items. A numerical example and a sensitivity analysis are presented to illustrate the model.

  8. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
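The additive Holt-Winters recursions used as the prediction component can be sketched in a few lines. This is an illustrative stand-in, not the authors' propagator code; the noise-free test series and the smoothing constants are invented.

```python
def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    """Additive Holt-Winters: recursive level/trend/seasonal updates, then h-step forecasts."""
    # Crude initialisation from the first two seasons of length m.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        prev = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    n = len(y)
    return [level + h * trend + season[(n + h - 1) % m] for h in range(1, horizon + 1)]

# Noise-free series with a linear trend plus period-4 seasonality; the recursions
# should lock onto it and extrapolate accurately.
m = 4
pattern = [3.0, -1.0, -2.0, 0.0]
y = [0.5 * t + pattern[t % m] for t in range(80)]
fc = holt_winters_additive(y, m, alpha=0.5, beta=0.1, gamma=0.3, horizon=4)
truth = [0.5 * t + pattern[t % m] for t in range(80, 84)]
err = max(abs(a - b) for a, b in zip(fc, truth))
```

In the hybrid scheme, the same recursions would be applied to the residual (missing dynamics) between the analytical approximation and reference orbits rather than to a raw signal as here.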

  9. Sideways wall force produced during tokamak disruptions

    Science.gov (United States)

    Strauss, H.; Paccagnella, R.; Breslau, J.; Sugiyama, L.; Jardin, S.

    2013-07-01

A critical issue for ITER is to evaluate the forces produced on the surrounding conducting structures during plasma disruptions. We calculate the non-axisymmetric ‘sideways’ wall force Fx produced in disruptions. Simulations were carried out of disruptions produced by destabilization of n = 1 modes by a vertical displacement event (VDE). The force depends strongly on γτwall, where γ is the mode growth rate and τwall is the wall penetration time, and is largest for γτwall = constant, which depends on initial conditions. Simulations of disruptions caused by a model of massive gas injection were also performed. It was found that the wall force increases approximately linearly with the displacement from the magnetic axis produced by a VDE. These results are also obtained with an analytical model. Disruptions are accompanied by toroidal variation of the plasma current Iφ. This is caused by toroidal variation of the halo current, as verified computationally and analytically.

  10. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

Operational activities in the ocean, such as planning structural repairs or fishing expeditions, require real-time prediction of waves over typical durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs alternative artificial intelligence approaches, an artificial neural network (ANN), genetic programming (GP) and a model tree (MT), to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
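The core of such a scheme, pairing past wind observations with later wave observations and fitting a predictor, can be sketched with a plain linear regression as a stand-in for the ANN/GP/MT models. The wind process, the transfer factor of 0.25 and the 3 h lead time are all invented for illustration.

```python
import random

def simple_linreg(xs, ys):
    """Closed-form least-squares fit of y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

random.seed(7)
# Hypothetical hourly wind speed (m/s) with slow persistence, mean-reverting around 8 m/s.
wind, u = [], 8.0
for _ in range(300):
    u = 0.95 * u + 0.4 + random.gauss(0, 0.6)
    wind.append(u)

lead = 3
# wave[k] is the (invented) wave height observed `lead` hours after wind[k] was measured.
wave = [0.25 * wind[k] + random.gauss(0, 0.05) for k in range(len(wind) - lead)]

split = 200                                   # train on the first part, test on the rest
a, b = simple_linreg(wind[:split], wave[:split])
test_pred = [a + b * w for w in wind[split:len(wave)]]
test_obs = wave[split:]
model_rmse = rmse(test_pred, test_obs)
clim_rmse = rmse([sum(wave[:split]) / split] * len(test_obs), test_obs)
```

Even this linear stand-in beats a climatological (mean) forecast when wind genuinely drives the waves; the ANN, GP and MT models in the study play the same role with nonlinear mappings and both wind speed and direction as inputs.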

  11. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while model-based approaches in general present serious drawbacks on this point.

  13. Spherical collapse model in time varying vacuum cosmologies

    International Nuclear Information System (INIS)

    Basilakos, Spyros; Plionis, Manolis; Sola, Joan

    2010-01-01

We investigate the virialization of cosmic structures in the framework of flat Friedmann-Lemaitre-Robertson-Walker cosmological models, in which the vacuum energy density evolves with time. In particular, our analysis focuses on the study of spherical matter perturbations, as they decouple from the background expansion, 'turn around', and finally collapse. We generalize the spherical collapse model to the case when the vacuum energy is a running function of the Hubble rate, Λ = Λ(H). A particularly well-motivated model of this type is the so-called quantum field vacuum, in which Λ(H) is a quadratic function, Λ(H) = n_0 + n_2 H^2, with n_0 ≠ 0. This model was previously studied by our team using the latest high-quality cosmological data to constrain its free parameters, as well as the predicted cluster formation rate. It turns out that the corresponding Hubble expansion history resembles that of the traditional ΛCDM cosmology. We use this Λ(t)CDM framework to illustrate the fact that the properties of the spherical collapse model (virial density, collapse factor, etc.) depend on the choice of the considered vacuum energy (homogeneous or clustered). In particular, if the distribution of the vacuum energy is clustered, then, under specific conditions, we can produce more concentrated structures with respect to the homogeneous vacuum energy case.

  14. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.

  15. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  16. Non-invasive rapid harvest time determination of oil-producing microalgae cultivations for bio-diesel production by using Chlorophyll fluorescence

    Directory of Open Access Journals (Sweden)

Yaqin Qiao

    2015-10-01

    Full Text Available For the large-scale cultivation of microalgae for biodiesel production, one of the key problems is the determination of the optimum time for algal harvest when algae cells are saturated with neutral lipids. In this study, a method to determine the optimum harvest time in oil-producing microalgal cultivations by measuring the maximum photochemical efficiency of photosystem II (PSII, also called Fv/Fm, was established. When oil-producing Chlorella strains were cultivated and then treated with nitrogen starvation, it not only stimulated neutral lipid accumulation, but also affected the photosynthesis system, with the neutral lipid contents in all four algae strains – Chlorella sorokiniana C1, Chlorella sp. C2, C. sorokiniana C3, C. sorokiniana C7 – correlating negatively with the Fv/Fm values. Thus, for the given oil-producing algae, in which a significant relationship between the neutral lipid content and Fv/Fm value under nutrient stress can be established, the optimum harvest time can be determined by measuring the value of Fv/Fm. It is hoped that this method can provide an efficient way to determine the harvest time rapidly and expediently in large-scale oil-producing microalgae cultivations for biodiesel production.

  17. Modeling and investigation of submerged fermentation process to produce extracellular polysaccharide using Lactobacillus confusus.

    Science.gov (United States)

    Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J

    2014-12-19

The main objective of the present study is to investigate and optimize submerged fermentation (SMF) process parameters, such as the addition of coconut water, NaCl dose, incubation time and temperature, for the production of extracellular polysaccharide (EPS) and biomass using Lactobacillus confusus. Response surface methodology (RSM) coupled with a four-factor, three-level Box-Behnken design (BBD) was employed to model the SMF process. RSM analysis indicated good correspondence between experimental and predicted values. Three-dimensional (3D) response surface plots were used to study the interactive effects of process variables on the SMF process. The optimum process conditions for maximum production of EPS and biomass were found to be as follows: addition of coconut water of 40%, NaCl dose of 15%, incubation time of 24 h and temperature of 35°C. Under these conditions, 10.57 g/L of EPS and 3.9 g/L of biomass were produced. Copyright © 2014 Elsevier Ltd. All rights reserved.
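The response-surface logic can be illustrated on a one-factor slice: fit a quadratic through the responses at the three coded levels and locate its stationary point. The full study used a four-factor Box-Behnken design; the yields below are invented.

```python
def quad_fit_coded(y_minus, y_zero, y_plus):
    """Exact quadratic y = b0 + b1*x + b2*x^2 through responses at coded levels -1, 0, +1."""
    b0 = y_zero
    b1 = (y_plus - y_minus) / 2.0
    b2 = (y_plus + y_minus) / 2.0 - y_zero
    return b0, b1, b2

def stationary_point(b1, b2):
    """Coded location of the quadratic's maximum (if b2 < 0) or minimum (if b2 > 0)."""
    return -b1 / (2.0 * b2)

# Hypothetical EPS yields (g/L) at coded temperature levels -1, 0, +1
# (which could correspond to, e.g., 30, 35, 40 degrees C), peaking near the centre.
b0, b1, b2 = quad_fit_coded(8.1, 10.5, 8.9)
x_opt = stationary_point(b1, b2)   # coded optimum; decode via centre + step * x_opt
```

With these invented yields the fit gives a negative curvature (b2 = -2.0) and an optimum at x_opt = 0.1, i.e. just above the centre level, which is the kind of interior optimum the BBD analysis in the study identifies for each factor.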

  18. Stochastic modelling of Listeria monocytogenes single cell growth in cottage cheese with mesophilic lactic acid bacteria from aroma producing cultures

    DEFF Research Database (Denmark)

    Østergaard, Nina Bjerre; Christiansen, Lasse Engbo; Dalgaard, Paw

    2015-01-01

A stochastic model was developed for simultaneous growth of low numbers of Listeria monocytogenes and populations of lactic acid bacteria from the aroma producing cultures applied in cottage cheese. During more than two years, different batches of cottage cheese with aroma culture were analysed [... 2014. Modelling the effect of lactic acid bacteria from starter- and aroma culture on growth of Listeria monocytogenes in cottage cheese. International Journal of Food Microbiology. 188, 15-25]. Growth of L. monocytogenes single cells, using lag time distributions corresponding to three different...

  19. A COMPARISON OF THE TENSILE STRENGTH OF PLASTIC PARTS PRODUCED BY A FUSED DEPOSITION MODELING DEVICE

    Directory of Open Access Journals (Sweden)

    Juraj Beniak

    2015-12-01

    Full Text Available Rapid Prototyping systems are nowadays increasingly used in many areas of industry, not only for producing design models but also for producing parts for final use. We need to know the properties of these parts. When we talk about the Fused Deposition Modeling (FDM technique and FDM devices, there are many possible settings for devices and models which could influence the properties of a final part. In addition, devices based on the same principle may use different operational software for calculating the tool path, and this may have a major impact. The aim of this paper is to show the tensile strength value for parts produced from different materials on the Fused Deposition Modeling device when the horizontal orientation of the specimens is changed.

  20. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    OpenAIRE

    Rao, Yuhan; Zhu, Xiaolin; Chen, Jin; Wang, Jianmin

    2015-01-01

    Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions for current NDVI datasets. Therefore, several methods are developed to produce high resolution (spatial and temporal) NDVI time-series datasets, which face some limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently bl...

  1. Application of data mining in three-dimensional space time reactor model

    International Nuclear Information System (INIS)

    Jiang Botao; Zhao Fuyu

    2011-01-01

A high-fidelity three-dimensional space-time nodal method has been developed to simulate the dynamics of the reactor core for real-time simulation. This three-dimensional reactor core mathematical model is composed of six sub-models: a neutron kinetics model, a decay heat model, a fuel conduction model, a thermal hydraulics model, a lower plenum model, and a core flow distribution model. Simulation of each sub-model produces operational data in which valuable information reflecting the reactor core operating status may be hidden, so the primary concern is how to discover this information. Data mining (DM) was created and developed to solve exactly this problem, in engineering as well as in business fields. Generally speaking, data mining is the process of finding useful and interesting information in huge data pools. Support Vector Machine (SVM) is a data mining technique that has appeared in recent years, and SVR is a variant of SVM applied to regression problems. This paper presents only two significant sub-models of the three-dimensional reactor core mathematical model, the nodal space-time neutron kinetics model and the thermal hydraulics model, based on which the neutron flux and enthalpy distributions of the core are obtained by solving the three-dimensional nodal space-time kinetics equations and the energy equations for both single- and two-phase flows, respectively. Moreover, it describes how the three-dimensional reactor core model can also be used to calculate and determine the reactivity effects of the moderator temperature, boron concentration, fuel temperature, coolant void, xenon worth, samarium worth, control element assembly (CEA) positions and core burnup status. The main mathematical theory of SVR is then introduced briefly, on the basis of which SVR is applied to the data generated by two sample calculations, a rod ejection transient and axial

  2. A semi-empirical model for mesospheric and stratospheric NOy produced by energetic particle precipitation

    Directory of Open Access Journals (Sweden)

    B. Funke

    2016-07-01

    Full Text Available The MIPAS Fourier transform spectrometer on board Envisat has measured global distributions of the six principal reactive nitrogen (NOy compounds (HNO3, NO2, NO, N2O5, ClONO2, and HNO4 during 2002–2012. These observations were used previously to detect regular polar winter descent of reactive nitrogen produced by energetic particle precipitation (EPP down to the lower stratosphere, often called the EPP indirect effect. It has further been shown that the observed fraction of NOy produced by EPP (EPP-NOy has a nearly linear relationship with the geomagnetic Ap index when taking into account the time lag introduced by transport. Here we exploit these results in a semi-empirical model for computation of EPP-modulated NOy densities and wintertime downward fluxes through stratospheric and mesospheric pressure levels. Since the Ap dependence of EPP-NOy is distorted during episodes of strong descent in Arctic winters associated with elevated stratopause events, a specific parameterization has been developed for these episodes. This model accurately reproduces the observations from MIPAS and is also consistent with estimates from other satellite instruments. Since stratospheric EPP-NOy depositions lead to changes in stratospheric ozone with possible implications for climate, the model presented here can be utilized in climate simulations without the need to incorporate many thermospheric and upper mesospheric processes. By employing historical geomagnetic indices, the model also allows for reconstruction of the EPP indirect effect since 1850. We found secular variations of solar cycle-averaged stratospheric EPP-NOy depositions on the order of 1 GM. In particular, we model a reduction of the EPP-NOy deposition rate during the last 3 decades, related to the coincident decline of geomagnetic activity that corresponds to 1.8 % of the NOy production rate by N2O oxidation. As the decline of the geomagnetic activity level is expected to continue in the

  3. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
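A toy linear cascade (not the paper's trained RNN) shows how a functionally feedforward architecture turns a brief input into a sequence of successively later activity peaks, i.e. a population code for elapsed time. All parameters here are invented for illustration.

```python
def simulate_chain(n_units, tau, dt, steps):
    """Euler-integrate a feedforward chain of leaky units: unit 0 receives a brief input
    pulse, and each later unit is driven by the one before it. Returns activity traces."""
    x = [0.0] * n_units
    traces = [[] for _ in range(n_units)]
    for k in range(steps):
        pulse = 1.0 if k * dt < 0.1 else 0.0     # 100 ms input pulse to unit 0
        drive = [pulse] + x[:-1]                  # unit i > 0 is driven by unit i-1
        x = [xi + dt * (-xi + di) / tau for xi, di in zip(x, drive)]
        for i, xi in enumerate(x):
            traces[i].append(xi)
    return traces

traces = simulate_chain(n_units=6, tau=0.5, dt=0.01, steps=600)
# Time (s) at which each unit's activity peaks; later units peak later.
peak_times = [max(range(len(tr)), key=lambda k: tr[k]) * 0.01 for tr in traces]
```

Each downstream unit peaks later than its predecessor, so "which unit is most active" encodes elapsed time since the pulse; in the paper this ordered structure emerges from training a recurrent network with balanced excitation and inhibition rather than from an explicit chain.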

  4. Modelling blazar flaring using a time-dependent fluid jet emission model - an explanation for orphan flares and radio lags

    Science.gov (United States)

    Potter, William J.

    2018-01-01

    Blazar jets are renowned for their rapid violent variability and multiwavelength flares, however, the physical processes responsible for these flares are not well understood. In this paper, we develop a time-dependent inhomogeneous fluid jet emission model for blazars. We model optically thick radio flares for the first time and show that they are delayed with respect to the prompt optically thin emission by ∼months to decades, with a lag that increases with the jet power and observed wavelength. This lag is caused by a combination of the travel time of the flaring plasma to the optically thin radio emitting sections of the jet and the slow rise time of the radio flare. We predict two types of flares: symmetric flares - with the same rise and decay time, which occur for flares whose duration is shorter than both the radiative lifetime and the geometric path-length delay time-scale; extended flares - whose luminosity tracks the power of particle acceleration in the flare, which occur for flares with a duration longer than both the radiative lifetime and geometric delay. Our model naturally produces orphan X-ray and γ-ray flares. These are caused by flares that are only observable above the quiescent jet emission in a narrow band of frequencies. Our model is able to successfully fit to the observed multiwavelength flaring spectra and light curves of PKS1502+106 across all wavelengths, using a transient flaring front located within the broad-line region.

  5. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  6. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  7. The HTA core model: a novel method for producing and reporting health technology assessments

    DEFF Research Database (Denmark)

    Lampe, Kristian; Mäkelä, Marjukka; Garrido, Marcial Velasco

    2009-01-01

    OBJECTIVES: The aim of this study was to develop and test a generic framework to enable international collaboration for producing and sharing results of health technology assessments (HTAs). METHODS: Ten international teams constructed the HTA Core Model, dividing information contained...... for diagnostic technologies. Two Core HTAs were produced in parallel with developing the model, providing the first real-life testing of the Model and input for further development. The results of formal validation and public feedback were primarily positive. Development needs were also identified and considered....... An online Handbook is available. CONCLUSIONS: The HTA Core Model is a novel approach to HTA. It enables effective international production and sharing of HTA results in a structured format. The face validity of the Model was confirmed during the project, but further testing and refining are needed to ensure...

  8. Neural Network Models for Time Series Forecasts

    OpenAIRE

    Tim Hill; Marcus O'Connor; William Remus

    1996-01-01

    Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition (Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a ...

  9. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    Science.gov (United States)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
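To make the out-of-sample evaluation protocol concrete, here is a minimal, hypothetical sketch (simulated data, not the Hindu Kush catalogue): an AR(1) model is re-fitted by least squares on an expanding window and benchmarked against a naive last-value forecast by one-step-ahead RMSE.

```python
import random

def ols_ar1(train):
    """Least-squares AR(1): x[t] = a + b*x[t-1] + e[t]."""
    xs, ys = train[:-1], train[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rolling_rmse(series, n_test):
    """One-step-ahead out-of-sample RMSE of AR(1) vs a naive forecast."""
    se_ar = se_naive = 0.0
    for t in range(len(series) - n_test, len(series)):
        a, b = ols_ar1(series[:t])            # re-fit on all data up to t
        f_ar, f_naive = a + b * series[t - 1], series[t - 1]
        se_ar += (series[t] - f_ar) ** 2
        se_naive += (series[t] - f_naive) ** 2
    return (se_ar / n_test) ** 0.5, (se_naive / n_test) ** 0.5

# simulate an AR(1) series with moderate persistence (phi = 0.3)
rng = random.Random(42)
x, series = 0.0, []
for _ in range(700):
    x = 0.3 * x + rng.gauss(0.0, 1.0)
    series.append(x)

rmse_ar, rmse_naive = rolling_rmse(series, n_test=200)
```

The same loop generalizes to any of the competing models: only the fitting and forecasting steps change, while the expanding-window evaluation stays fixed, which is what makes the RMSE comparison fair.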

  10. Forecasting the Reference Evapotranspiration Using Time Series Model

    Directory of Open Access Journals (Sweden)

    H. Zare Abyaneh

    2016-10-01

    evapotranspiration were obtained. The mean values of evapotranspiration in the study period were 4.42, 3.93, 5.05, 5.49, and 5.60 mm day−1 in Esfahan, Semnan, Shiraz, Kerman, and Yazd, respectively. The Augmented Dickey-Fuller (ADF) test was applied to the time series. The results showed that in all stations except Shiraz, the time series had a unit root and were non-stationary; the non-stationary series became stationary at the 1st difference. Using the EViews 7 software, seasonal ARIMA models were fitted to the evapotranspiration time series, and the coefficient of determination (R2), the Durbin-Watson statistic (DW), and the Hannan-Quinn (HQ), Schwarz (SC), and Akaike (AIC) information criteria were used to select the best model for each station. The selected models are listed in Table 2. The information criteria (AIC, SC, and HQ) were also used to assess model parsimony. The independence assumption of the model residuals was confirmed by a sensitive diagnostic check, and the homoscedasticity and normality assumptions were tested using other diagnostic tests.

Table 2 - The selected time series models for the stations

Station | Seasonal ARIMA model | SC | HQ | AIC | R2 | DW
Esfahan | ARIMA(1,1,1)×(1,0,1)12 | 1.2571 | 1.2840 | 1.2396 | 0.8800 | 1.9987
Semnan | ARIMA(5,1,2)×(1,0,1)12 | 1.5665 | 1.5122 | 1.4770 | 0.8543 | 1.9911
Shiraz | ARIMA(2,0,3)×(1,0,1)12 | 1.3312 | 1.2881 | 1.2601 | 0.9665 | 1.9873
Kerman | ARIMA(5,1,1)×(1,0,1)12 | 1.8097 | 1.7608 | 1.8097 | 0.8557 | 2.0042
Yazd | ARIMA(2,1,3)×(1,1,1)12 | 1.7472 | 1.7032 | 1.6746 | 0.5264 | 1.9943

The seasonal ARIMA models presented in Table 2 were used at a 12-month (2004-2005) forecasting horizon. The results showed that the models produce good out-of-sample forecasts: across all stations, the lowest correlation coefficient was 0.988 and the highest root mean square error was 0.515 mm day−1.
Conclusion: In the presented paper, reference evapotranspiration in the five synoptic
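In practice the seasonal ARIMA fitting above is done with a dedicated package (EViews, as in the paper, or e.g. statsmodels' SARIMAX in Python). Purely to illustrate the order-selection logic — difference a unit-root series once, then pick the autoregressive order minimising AIC — here is a minimal pure-Python sketch that fits AR(p) by ordinary least squares; all function names are hypothetical and the seasonal/MA parts are omitted.

```python
import math
import random

def fit_ar(series, p):
    """Least-squares AR(p) via the normal equations; returns (coeffs, rss).
    Row layout: [1, x[t-1], ..., x[t-p]] predicting x[t]."""
    rows = [[1.0] + series[t - p:t][::-1] for t in range(p, len(series))]
    y = series[p:]
    k = p + 1
    # normal equations X'X beta = X'y, solved by Gaussian elimination
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):                       # forward elimination
        for r2 in range(c + 1, k):
            f = xtx[r2][c] / xtx[c][c]
            xtx[r2] = [a - f * b for a, b in zip(xtx[r2], xtx[c])]
            xty[r2] -= f * xty[c]
    beta = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        beta[c] = (xty[c] - sum(xtx[c][j] * beta[j]
                                for j in range(c + 1, k))) / xtx[c][c]
    rss = sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
              for r, yi in zip(rows, y))
    return beta, rss

def aic_order(series, max_p=5):
    """Pick the AR order minimising AIC = n*ln(RSS/n) + 2*(p+1)."""
    best = None
    for p in range(1, max_p + 1):
        n = len(series) - p
        _, rss = fit_ar(series, p)
        aic = n * math.log(rss / n) + 2 * (p + 1)
        if best is None or aic < best[0]:
            best = (aic, p)
    return best[1]

# demo: simulate an AR(2) series and recover its coefficients and order;
# a non-stationary series would first be replaced by its first difference,
# e.g. diff = [x2 - x1 for x1, x2 in zip(series, series[1:])]
rng = random.Random(7)
x1 = x2 = 0.0
series = []
for _ in range(500):
    x = 0.5 * x1 - 0.3 * x2 + rng.gauss(0.0, 1.0)
    x2, x1 = x1, x
    series.append(x)
beta, _ = fit_ar(series, 2)
order = aic_order(series)
```

The station-by-station model choice in Table 2 follows the same pattern, just over the richer seasonal ARIMA family and with several criteria (AIC, SC, HQ) rather than AIC alone.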

  11. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and predictions are made by a global model or by adaptive local models based on the dynamical neighbours found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sums of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space directly from the observed time series with missing values. This reconstruction results in non-continuous trajectories; however, local model predictions can still be made from the dynamical neighbours reconstructed from the non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series is used, created by applying a random missing-value pattern to the original (complete) time series.
There exist two main performance measures used in this work: (1) error measures between the actual
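A minimal sketch of the non-imputing variant described above: a time-delay embedding that simply skips any delay vector touching a gap, followed by nearest-neighbour ("local model") prediction among the surviving vectors. The chaotic logistic map stands in for the surge series, and all names are hypothetical.

```python
def delay_vectors(series, dim=2, tau=1):
    """Time-delay embedding of a series with gaps: any delay vector (or
    target) touching a missing value (None) is skipped, so gaps break the
    trajectory but the remaining pairs stay usable (non-imputing)."""
    pairs = []
    for t in range((dim - 1) * tau, len(series) - 1):
        vec = [series[t - k * tau] for k in reversed(range(dim))]
        target = series[t + 1]
        if None not in vec and target is not None:
            pairs.append((vec, target))
    return pairs

def local_model_predict(pairs, query, k=1):
    """Predict the next value from the k nearest dynamical neighbours."""
    nearest = sorted(pairs, key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(p[0], query)))[:k]
    return sum(target for _, target in nearest) / k

# demo: chaotic logistic map standing in for the surge series, with gaps
series = [0.4]
for _ in range(600):
    series.append(3.9 * series[-1] * (1.0 - series[-1]))
for i in range(50, 600, 37):          # artificial sensor dropouts
    series[i] = None

pairs = delay_vectors(series[:550])   # embed the gappy past
pred = local_model_predict(pairs, [series[559], series[560]])
```

The point of the non-imputing route is visible here: no missing value is ever invented, yet a prediction is still possible wherever the query vector itself is complete.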

  12. Search for the standard model Higgs Boson produced in association with top quarks using the full CDF data set.

    Science.gov (United States)

    Aaltonen, T; Álvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Bae, T; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bedeschi, F; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brigliadori, L; Bromberg, C; Brucken, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calamba, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chung, W H; Chung, Y S; Ciocci, M A; Clark, A; Clarke, C; Compostella, G; Connors, J; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; Dell'Orso, M; Demortier, L; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, M; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, S; Ershaidat, N; Eusebi, R; Farrington, S; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Funakoshi, Y; Furic, I; Gallinaro, M; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Grinstein, S; Grosso-Pilcher, C; Group, R C; Guimaraes da 
Costa, J; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kasmi, A; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kim, Y J; Kimura, N; Kirby, M; Klimenko, S; Knoepfel, K; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Limosani, A; Lin, C-J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, H; Liu, Q; Liu, T; Lockwitz, S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Maestro, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Mastrandrea, P; Matera, K; Mattson, M E; Mazzacane, A; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Noh, S Y; Norniella, O; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, 
T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Poprocki, S; Potamianos, K; Prokoshin, F; Pranko, A; Ptohos, F; Punzi, G; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Riddick, T; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Sakurai, Y; Santi, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shochet, M; Shreyber-Tecker, I; Simonenko, A; Sinervo, P; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Sorin, V; Song, H; Squillacioti, P; Stancari, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Ukegawa, F; Uozumi, S; Varganov, A; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vizán, J; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Wester, W C; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamato, D; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhou, C; Zucchelli, S

    2012-11-02

    A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb⁻¹, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section σ(ttH → lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c², we expect to set a limit of 12.6 and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date.

  13. Search for standard model Higgs bosons produced in association with W bosons.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; 
Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; 
Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; 
Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Siegrist, J; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S; Group, R C

    2008-02-01

    We report on the results of a search for standard model Higgs bosons produced in association with W bosons from pp̄ collisions at √s = 1.96 TeV. The search uses a data sample corresponding to approximately 1 fb⁻¹ of integrated luminosity. Events consistent with the W → ℓν and H → bb̄ signature are selected by triggering on a high-pT electron or muon candidate and tagging one or two of the jet candidates as having originated from b quarks. A neural network filter rejects a fraction of tagged charm and light-flavor jets, increasing the b-jet purity in the sample. We observe no excess ℓνbb̄ production beyond the background expectation, and we set 95% confidence level upper limits on the production cross section times branching fraction σ(pp̄ → WH)·Br(H → bb̄) ranging from 3.9 to 1.3 pb, for specific Higgs boson mass hypotheses in the range 110 to 150 GeV/c², respectively.

  14. Time-dependent Hartree approximation and time-dependent harmonic oscillator model

    International Nuclear Information System (INIS)

    Blaizot, J.P.

    1982-01-01

    We present an analytically soluble model for studying nuclear collective motion within the framework of the time-dependent Hartree (TDH) approximation. The model reduces the TDH equations to the Schroedinger equation of a time-dependent harmonic oscillator. Using canonical transformations and coherent states we derive a few properties of the time-dependent harmonic oscillator which are relevant for applications. We analyse the role of the normal modes in the time evolution of a system governed by TDH equations. We show how these modes couple together due to the anharmonic terms generated by the non-linearity of the theory. (orig.)

  15. Modeling the night-time CO2 4.3 μm emissions in the mesosphere/lower thermosphere

    Science.gov (United States)

    Panka, Peter; Kutepov, Alexander; Feofilov, Artem; Rezac, Ladislav; Janches, Diego

    2016-04-01

    We present a detailed non-LTE model of the night-time CO2 4.3 μm emissions in the MLT. The model accounts for various mechanisms of non-thermal excitation of CO2 molecules and for both inter- and intra-molecular vibrational-vibrational (VV) and vibrational-translational (VT) energy exchanges. In this model, we pay specific attention to the transfer of vibrational energy of OH(ν), produced in the chemical reaction H + O3, to the CO2(ν3) vibrational mode. With the help of this model, we simulated a set of non-LTE 4.3 μm MLT limb emissions for typical atmospheric scenarios and compared the vertical profiles of integrated radiances with the corresponding SABER/TIMED observations. The implications of this comparison for selecting non-LTE model parameters (rate coefficients) and for night-time CO2 density retrieval in the MLT are discussed.

  16. A MODEL FOR PRODUCING STABLE, BROADBAND TERAHERTZ COHERENT SYNCHROTRON RADIATION IN STORAGE RINGS

    International Nuclear Information System (INIS)

    Sannibale, Fernando; Byrd, John M.; Loftsdottir, Agusta; Martin, MichaelC.; Venturini, Marco

    2003-01-01

    We present a model for producing stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of the bunch shape by the synchrotron radiation (SR), enhancing higher-frequency coherent emission, and limits to stable emission set by a microbunching instability excited by the SR. We use this model to optimize the performance of a source for CSR emission.

  17. Modeling nitrogen plasmas produced by intense electron beams

    Energy Technology Data Exchange (ETDEWEB)

    Angus, J. R.; Swanekamp, S. B.; Schumer, J. W.; Hinshelwood, D. D. [Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States); Mosher, D.; Ottinger, P. F. [Independent contractors for NRL through Engility, Inc., Alexandria, Virginia 22314 (United States)

    2016-05-15

    A new gas-chemistry model is presented to treat the breakdown of nitrogen gas at pressures on the order of 1 Torr by intense electron beams with current densities on the order of 10 kA/cm² and pulse durations on the order of 100 ns. For these parameter regimes, the gas transitions from a weakly ionized molecular state to a strongly ionized atomic state on the time scale of the beam pulse. The model is coupled to a 0D circuit model using the rigid-beam approximation that can be driven by specifying the time and spatial profiles of the beam pulse. Simulation results are in good agreement with experimental measurements of the line-integrated electron density from experiments done using the Gamble II generator at the Naval Research Laboratory. It is found that the species are mostly in the ground and metastable states during the atomic phase, but that ionization proceeds predominantly through thermal ionization of optically allowed states with excitation energies close to the ionization limit.

  18. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    Science.gov (United States)

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gaps, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal and internal gaps showed significant differences according to tooth type and manufacturing method. However, the gaps obtained with all manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.

  19. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    Science.gov (United States)

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
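The data-expansion step described above — casting a discrete time survival model as a standard multilevel binary response model — can be sketched as follows. This is a minimal illustration with invented records (the function name and data are hypothetical, not from the paper); the expanded binary outcome y would then be modelled with a (multilevel) logistic regression on period and covariates.

```python
def person_period_expand(records):
    """Expand (id, duration, event) survival records into person-period
    rows (id, period, y): y = 1 only in the final period of an observed
    event; censored subjects contribute only zeros."""
    rows = []
    for pid, duration, event in records:
        for period in range(1, duration + 1):
            y = 1 if (event and period == duration) else 0
            rows.append((pid, period, y))
    return rows

data = [("cow1", 3, 1),   # event (e.g. mastitis) observed in period 3
        ("cow2", 2, 0)]   # censored after period 2
expanded = person_period_expand(data)
```

It is exactly this expansion that makes the data set very large — one row per subject per period at risk — which is why the mixing and speed of the MCMC sampler on the resulting binary-response model matter so much.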

  20. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation between the two models' predictions of P loss; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  1. The value and adaptation of plant uptake models in international trade of produce treated with crop protection products

    DEFF Research Database (Denmark)

    Kennedy, C.; Anderson, J.; Snyder, N.

    2010-01-01

    Crop Protection Product (CPP) national registrations and/or international trade require magnitude and decline of residue data for treated produce. These data are used to assess human dietary risk and establish legal limits (Maximum Residue Limits, MRLs) for traded produce. The ability to predict...... residues based on limited data sets affords business value by enabling informed product development decisions about the likelihood for MRL compliance for varied product use scenarios. Predicted residues can additionally support the design and conduct of time-constrained interdependent studies required...... for product registrations. While advances in predicting residues for the case of foliar applications of CPPs have been achieved, predictions for the case of soil applications of CPPs provide additional challenge. The adaptation of a newly developed dynamic model to CPP product use scenarios will be explored...

  2. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
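For intuition, this kind of discrete-time recursion is easy to exhibit in the simpler affine (Gaussian AR(1)) special case, where bond prices are exponential-affine in the factor; the paper's quadratic models add a quadratic term in the factors and are not reproduced here. All parameter names and values below are illustrative:

```python
# Sketch of a discrete-time recursive bond-pricing scheme, shown for the
# *affine* Gaussian AR(1) special case (the quadratic models discussed above
# add a quadratic term in the factors).
import math

def zero_coupon_price(r0, mu, phi, sigma, n):
    """Price of an n-period zero-coupon bond when the short rate follows
    r_{t+1} = mu + phi * r_t + sigma * eps, via P = exp(A_n - B_n * r0).
    Recursions: B_{k+1} = 1 + phi*B_k,  A_{k+1} = A_k - B_k*mu + (sigma*B_k)^2 / 2."""
    A, B = 0.0, 0.0
    for _ in range(n):
        A, B = A - B * mu + 0.5 * (sigma * B) ** 2, 1.0 + phi * B
    return math.exp(A - B * r0)

# Sanity case: with phi = sigma = 0 and mu = r0, the rate is constant,
# so the n-period price collapses to exp(-n * r0).
price = zero_coupon_price(r0=0.03, mu=0.03, phi=0.0, sigma=0.0, n=5)
```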

  3. Is the Merchant Power Producer a broken model?

    International Nuclear Information System (INIS)

    Nelson, James; Simshauser, Paul

    2013-01-01

    Deregulated energy markets were founded on the Merchant Power Producer, a stand-alone generator that sold its production to the spot and short-term forward markets, underpinned by long-dated project finance. The initial enthusiasm that existed for investment in existing and new merchant power plant capacity shortly after power system deregulation has progressively dissipated, following an excess entry result. In this article, we demonstrate why this has become a global trend. Using debt-sizing parameters typically used by project banks, we model a benchmark plant, then re-simulate its performance using live energy market price data and find that such financings are no longer feasible in the absence of long-term Power Purchase Agreements. - Highlights: ► We model a hypothetical CCGT plant in QLD under project financing constraints typical of the industry. ► We simulate plant operations with live market data to analyse the results. ► We find that a plant which should represent the industry's long-run marginal cost is not a feasible investment.

  4. Time-invariant PT product and phase locking in PT -symmetric lattice models

    Science.gov (United States)

    Joglekar, Yogesh N.; Onanga, Franck Assogba; Harter, Andrew K.

    2018-01-01

    Over the past decade, non-Hermitian, PT -symmetric Hamiltonians have been investigated as candidates for both a fundamental, unitary, quantum theory and open systems with a nonunitary time evolution. In this paper, we investigate the implications of the former approach in the context of the latter. Motivated by the invariance of the PT (inner) product under time evolution, we discuss the dynamics of wave-function phases in a wide range of PT -symmetric lattice models. In particular, we numerically show that, starting with a random initial state, a universal, gain-site location dependent locking between wave-function phases at adjacent sites occurs in the PT -symmetry-broken region. Our results pave the way towards understanding the physically observable implications of time invariants in the nonunitary dynamics produced by PT -symmetric Hamiltonians.

  5. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

Full Text Available Human detection in videos plays an important role in various real life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, the proposed feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., supervised convolutional neural network (S-CNN), pretrained CNN feature extractor, and hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. Pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training time takes 445 seconds. Learning in S-CNN takes 770 seconds with a high performance Graphical Processing Unit (GPU).

  6. Modeling terrestrial gamma ray flashes produced by relativistic feedback discharges

    Science.gov (United States)

    Liu, Ningyu; Dwyer, Joseph R.

    2013-05-01

    This paper reports a modeling study of terrestrial gamma ray flashes (TGFs) produced by relativistic feedback discharges. Terrestrial gamma ray flashes are intense energetic radiation originating from the Earth's atmosphere that has been observed by spacecraft. They are produced by bremsstrahlung interactions of energetic electrons, known as runaway electrons, with air atoms. An efficient physical mechanism for producing large fluxes of the runaway electrons to make the TGFs is the relativistic feedback discharge, where seed runaway electrons are generated by positrons and X-rays, products of the discharge itself. Once the relativistic feedback discharge becomes self-sustaining, an exponentially increasing number of relativistic electron avalanches propagate through the same high-field region inside the thundercloud until the electric field is partially discharged by the ionization created by the discharge. The modeling results indicate that the durations of the TGF pulses produced by the relativistic feedback discharge vary from tens of microseconds to several milliseconds, encompassing all durations of the TGFs observed so far. In addition, when a sufficiently large potential difference is available in thunderclouds, a self-propagating discharge known as the relativistic feedback streamer can be formed, which propagates like a conventional positive streamer. For the relativistic feedback streamer, the positive feedback mechanism of runaway electron production by the positrons and X-rays plays a similar role as the photoionization for the conventional positive streamer. The simulation results of the relativistic feedback streamer show that a sequence of TGF pulses with varying durations can be produced by the streamer. The relativistic streamer may initially propagate with a pulsed manner and turn into a continuous propagation mode at a later stage. Milliseconds long TGF pulses can be produced by the feedback streamer during its continuous propagation. However

  7. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  8. An experimentally verified model for estimating the distance resolution capability of direct time of flight 3D optical imaging systems

    International Nuclear Information System (INIS)

    Nguyen, K Q K; Fisher, E M D; Walton, A J; Underwood, I

    2013-01-01

    This report introduces a new statistical model for time-resolved photon detection in a generic single-photon-sensitive sensor array. The model is validated by comparing modelled data with experimental data collected on a single-photon avalanche diode sensor array. Data produced by the model are used alongside corresponding experimental data to calculate, for the first time, the effective distance resolution of a pulsed direct time of flight 3D optical imaging system over a range of conditions using four peak-detection algorithms. The relative performance of the algorithms is compared. The model can be used to improve the system design process and inform selection of the optimal peak-detection algorithm. (paper)
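The distance estimates in such a system rest on the direct time-of-flight relation d = c·t/2. The sketch below, with invented photon counts and timing jitter, simulates time-stamped detections and recovers the distance with a simple centroid peak estimate (one of several possible peak-detection algorithms, not necessarily one of the four compared in the paper):

```python
# Sketch of the direct time-of-flight principle behind the model: photon
# arrival times cluster around t = 2d/c, and a peak-detection step (here a
# simple centroid) recovers the distance. Jitter and counts are illustrative.
import random

random.seed(1)
C = 3.0e8                      # speed of light, m/s
TRUE_DISTANCE = 10.0           # metres

def detected_arrival_times(n_photons=5000, jitter_s=0.5e-9):
    """Gaussian-jittered arrival times around the round-trip time 2d/c."""
    t0 = 2.0 * TRUE_DISTANCE / C
    return [random.gauss(t0, jitter_s) for _ in range(n_photons)]

def estimate_distance(times):
    """Centroid peak estimate: mean arrival time mapped back to distance."""
    t_peak = sum(times) / len(times)
    return C * t_peak / 2.0

estimate = estimate_distance(detected_arrival_times())
```

With 0.5 ns of timing jitter, a single photon carries roughly 7.5 cm of distance uncertainty; averaging thousands of detections shrinks this, which is why the achievable resolution depends jointly on jitter, photon count, and the peak-detection algorithm.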

  9. A physical model to predict climate dynamics in ventilated bulk-storage of agricultural produce

    NARCIS (Netherlands)

    Lukasse, L.J.S.; Kramer-Cuppen, de J.E.; Voort, van der A.J.

    2007-01-01

This paper presents a physical model for predicting climate dynamics in ventilated bulk-storage of agricultural produce. A well-ordered model presentation was obtained by combining an object-oriented zonal decomposition with a process-oriented decomposition through matrix–vector notation. The

  10. Model-Checking Real-Time Control Programs

    DEFF Research Database (Denmark)

    Iversen, T. K.; Kristoffersen, K. J.; Larsen, Kim Guldstrand

    2000-01-01

In this paper, we present a method for automatic verification of real-time control programs running on LEGO(R) RCX(TM) bricks using the verification tool UPPAAL. The control programs, consisting of a number of tasks running concurrently, are automatically translated into the mixed automata model...... of UPPAAL. The fixed scheduling algorithm used by the LEGO(R) RCX(TM) processor is modeled in UPPAAL, and supply of similar (sufficient) timed automata models for the environment allows analysis of the overall real-time system using the tools of UPPAAL. To illustrate our technique for sorting LEGO(R) bricks...

  11. MAC-Level Communication Time Modeling and Analysis for Real-Time WSNs

    Directory of Open Access Journals (Sweden)

    STANGACIU, V.

    2016-02-01

Full Text Available Low-level communication protocols and their timing behavior are essential to developing wireless sensor networks (WSNs) able to provide the support and operating guarantees required by many current real-time applications. Nevertheless, this aspect still remains an open issue in the state of the art. In this paper we provide a detailed analysis of a recently proposed MAC-level communication timing model and demonstrate its usability in designing real-time protocols. The results of a large set of measurements are also presented and discussed here, in direct relation to the main time parameters of the analyzed model.

  12. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
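The resampling-bootstrap idea compared above is straightforward to sketch. The example below (distribution, sample size, and replicate count all invented for illustration) re-estimates mean survival time on resampled data to produce a percentile interval:

```python
# Minimal sketch of the resampling bootstrap: re-estimate a summary statistic
# (here, mean survival time) on resampled data to obtain an uncertainty
# interval. Data and sizes are made up for illustration.
import random
import statistics

random.seed(42)
survival_times = [random.expovariate(1 / 24.0) for _ in range(200)]  # months

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap interval for stat(data)."""
    reps = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(survival_times, statistics.mean)
```

Ignoring this resampling step is what yields the deceptively narrow confidence intervals the abstract warns about.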

  13. Modelling Social-Technical Attacks with Timed Automata

    DEFF Research Database (Denmark)

    David, Nicolas; David, Alexandre; Hansen, Rene Rydhof

    2015-01-01

    . In this paper we develop an approach towards modelling socio-technical systems in general and socio-technical attacks in particular, using timed automata and illustrate its application by a complex case study. Thanks to automated model checking and automata theory, we can automatically generate possible attacks...... in our model and perform analysis and simulation of both model and attack, revealing details about the specific interaction between attacker and victim. Using timed automata also allows for intuitive modelling of systems, in which quantities like time and cost can be easily added and analysed....

  14. A new G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, S.H.; Gardner, R.P.

    2000-01-01

A hybrid G-M counter dead time model was derived by combining the idealized paralyzable and non-paralyzable models. The new model involves two parameters, which are the paralyzable and non-paralyzable dead times. The dead times used in the model are very closely related to the physical dead time of the G-M tube and its resolving time. To check the validity of the model, the decaying source method with ⁵⁶Mn was used. The corrected counting rates by the new G-M dead time model were compared with the observed counting rates obtained from the measurement and gave very good agreement within 5% up to 7×10⁴ counts/s for a G-M tube with a dead time of about 300 μs
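For reference, the two idealized models the hybrid combines have the standard textbook forms m = n·e^(−nτ) (paralyzable) and m = n/(1 + nτ) (non-paralyzable), where n is the true rate, m the observed rate, and τ the dead time; the paper's specific two-parameter combination is not reproduced here. A quick numerical sketch with illustrative rates:

```python
# The two idealized dead-time models that the hybrid model combines
# (standard textbook forms; the paper's two-parameter combination is not
# reproduced here). n = true rate in counts/s, tau = dead time in s.
import math

def observed_rate_paralyzable(n, tau):
    """Paralyzable model: every event extends the dead period."""
    return n * math.exp(-n * tau)

def observed_rate_nonparalyzable(n, tau):
    """Non-paralyzable model: events during the dead period are simply lost."""
    return n / (1.0 + n * tau)

n, tau = 1000.0, 300e-6   # illustrative values
m_p = observed_rate_paralyzable(n, tau)
m_np = observed_rate_nonparalyzable(n, tau)
```

At the same true rate the paralyzable model always reports fewer counts, since e^(−nτ) < 1/(1 + nτ) for nτ > 0; the hybrid model's two parameters interpolate between these behaviours.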

  15. Modeling Confidence and Response Time in Recognition Memory

    Science.gov (United States)

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  16. Time-Dependent Toroidal Compactification Proposals and the Bianchi Type I Model: Classical and Quantum Solutions

    Directory of Open Access Journals (Sweden)

    L. Toledo Sesma

    2016-01-01

Full Text Available We construct an effective four-dimensional model by compactifying a ten-dimensional theory of gravity coupled with a real scalar dilaton field on a time-dependent torus. This approach is applied to the anisotropic cosmological Bianchi type I model, for which we study the classical coupling of the anisotropic scale factors with the two real scalar moduli produced by the compactification process. Under this approach, we present an isotropization mechanism for the Bianchi I cosmological model through the analysis of the ratio between the anisotropic parameters and the volume of the Universe, which in general remains constant or tends to zero at late times. We also find that the presence of extra dimensions in this model can accelerate the isotropization process depending on the momenta moduli values. Finally, we present some solutions to the corresponding Wheeler-DeWitt (WDW) equation in the context of standard quantum cosmology.

  17. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...
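The recursive and direct multi-period forecasts contrasted in this record can be illustrated with the simplest member of these model families, a linear AR(1); on noise-free data the two methods coincide exactly. All data and parameters below are invented for illustration:

```python
# Sketch of recursive vs. direct multi-period forecasting with a linear AR(1),
# the simplest case of the model families discussed (illustrative only).

def fit_ar1(series):
    """OLS intercept/slope of x_t regressed on x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return my - beta * mx, beta

def forecast_recursive(series, h):
    """Iterate the one-step model h times, feeding forecasts back in."""
    alpha, beta = fit_ar1(series)
    f = series[-1]
    for _ in range(h):
        f = alpha + beta * f
    return f

def forecast_direct(series, h):
    """Fit x_t directly on x_{t-h} and apply the fitted relation once."""
    x, y = series[:-h], series[h:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return alpha + beta * series[-1]

# Noise-free AR(1) data: x_t = 0.5 * x_{t-1}, x_0 = 64
series = [64.0 * 0.5 ** t for t in range(10)]
```

For nonlinear models the two approaches generally differ, which is why the simulation study mentioned above compares them.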

  18. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews and provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  19. Integrated fate modeling for exposure assessment of produced water on the Sable Island Bank (Scotian shelf, Canada).

    Science.gov (United States)

    Berry, Jody A; Wells, Peter G

    2004-10-01

    Produced water is the largest waste discharge from the production phase of oil and gas wells. Produced water is a mixture of reservoir formation water and production chemicals from the separation process. This creates a chemical mixture that has several components of toxic concern, ranging from heavy metals to soluble hydrocarbons. Analysis of potential environmental effects from produced water in the Sable Island Bank region (NS, Canada) was conducted using an integrated modeling approach according to the ecological risk assessment framework. A hydrodynamic dispersion model was used to describe the wastewater plume. A second fugacity-based model was used to describe the likely plume partitioning in the local environmental media of water, suspended sediment, biota, and sediment. Results from the integrated modeling showed that the soluble benzene and naphthalene components reach chronic no-effect concentration levels at a distance of 1.0 m from the discharge point. The partition modeling indicated that low persistence was expected because of advection forces caused by tidal currents for the Sable Island Bank system. The exposure assessment for the two soluble hydrocarbon components suggests that the risks of adverse environmental effects from produced water on Sable Island Bank are low.

  20. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
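As a concrete taste of the digital waveguide approach mentioned above, the following sketch implements its best-known plucked-string special case, a Karplus-Strong loop: a delay line whose output is fed back through a two-point averaging loss filter. It is an illustrative simplification, not a reproduction of any model in the article, and all parameters are invented:

```python
# A minimal digital waveguide string in its best-known form, the
# Karplus-Strong plucked string: a noise-filled delay line recirculated
# through an averaging (loss) filter. Parameters are illustrative.
import random

random.seed(0)

def pluck(delay_len=100, n_samples=2000):
    """Return n_samples of a plucked-string tone; pitch ~ rate / delay_len."""
    # Delay line initialized with noise = the "pluck" excitation
    line = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for _ in range(n_samples):
        first = line.pop(0)
        # Two-point average models frequency-dependent losses in the loop
        line.append(0.5 * (first + line[0]))
        out.append(first)
    return out

samples = pluck()
```

Each pass through the loop attenuates high frequencies faster than low ones, giving the characteristic decaying, string-like timbre.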

  1. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  3. Model of observed stochastic balance between work and free time supporting the LQTAI definition

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2008-01-01

A balance differential equation between free time and money-producing work time on the national economy level is formulated in a previous paper in terms of two dimensionless quantities, the fraction of work time and the total productivity factor defined as the ratio of the Gross Domestic Product to the total salary paid in return for work. Among the solutions there is one relation that compares surprisingly well with the relevant sequences of Danish data spanning from 1948 to 2003, and also with similar data from several other countries except for slightly different model parameter values. Statistical … significant systematically balance influencing parameters on the macro economical level than those considered in the definition in the previous paper of the Life Quality Time Allocation Index.

  4. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  5. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Directory of Open Access Journals (Sweden)

    Mindaugas Snipas

    2015-01-01

Full Text Available The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices containing a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models. This allowed us to reduce the CPU time needed to solve the CTMC models by a factor of ∼20.
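The block structure exploited here comes from the Kronecker (tensor) algebra underlying SANs: for independent components, the global CTMC generator is the Kronecker sum of the local generators and never has to be assembled by hand. A toy sketch with two hypothetical two-state channels (all rates invented):

```python
# The SAN idea in miniature: for independent components, the global CTMC
# generator is the Kronecker sum of the local generators. Rates below are
# arbitrary illustration values, not connexin gating rates.

def kron(A, B):
    """Dense Kronecker product of two small matrices (lists of lists)."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def kron_sum(A, B):
    """Kronecker sum: A ⊕ B = A ⊗ I + I ⊗ B."""
    n, m = len(A), len(B)
    I_n = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    I_m = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    P, S = kron(A, I_m), kron(I_n, B)
    return [[P[i][j] + S[i][j] for j in range(n * m)] for i in range(n * m)]

# Two-state (open/closed) gating generator per channel: Q[i][j] is the
# rate i -> j, with the diagonal chosen so each row sums to zero.
Q1 = [[-0.4, 0.4], [0.9, -0.9]]
Q2 = [[-0.2, 0.2], [0.5, -0.5]]
Q = kron_sum(Q1, Q2)   # 4x4 generator of the joint two-channel chain
```

The zeros at the "both channels switch at once" entries are exactly the block structure that makes storage and steady-state solution efficient for many channels.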

  6. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Science.gov (United States)

    Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Bukauskas, Feliksas F.

    2015-01-01

The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices containing a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models. This allowed us to reduce the CPU time needed to solve the CTMC models by a factor of ∼20. PMID:25705700

  7. Rovibrationally Resolved Time-Dependent Collisional-Radiative Model of Molecular Hydrogen and Its Application to a Fusion Detached Plasma

    Directory of Open Access Journals (Sweden)

    Keiji Sawada

    2016-12-01

Full Text Available A novel rovibrationally resolved collisional-radiative model of molecular hydrogen that includes 4,133 rovibrational levels for electronic states whose united atom principal quantum number is below six is developed. The rovibrational X¹Σg⁺ population distribution in a SlimCS fusion demo detached divertor plasma is investigated by solving the model time dependently with an initial 300 K Boltzmann distribution. The effective reaction rate coefficients of molecular assisted recombination and of other processes in which atomic hydrogen is produced are calculated using the obtained time-dependent population distribution.

  8. Automated Predicate Abstraction for Real-Time Models

    Directory of Open Access Journals (Sweden)

    Bahareh Badban

    2009-11-01

Full Text Available We present a technique designed to automatically compute predicate abstractions for dense real-time models represented as networks of timed automata. We use the CIPM algorithm from our previous work, which computes new invariants for timed automata control locations and prunes the model, to compute a predicate abstraction of the model. We do so by taking information regarding control locations and their newly computed invariants into account.

  9. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use the temperature as a proxy for the demand and hence as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
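The time-change idea is easy to sketch: run a mean-reverting base process in "business time" T(t), the integral of an activity rate. The paper's time change is stochastic and temperature-driven; the toy below uses a deterministic seasonal activity rate and invented parameters, and omits the jump component:

```python
# Minimal sketch of the time-change idea: a mean-reverting base process
# stepped in "business time" dT = activity(t) * dt. The activity rate here is
# deterministic and seasonal (the paper's is stochastic, temperature-driven);
# all parameters are illustrative, and jumps are omitted.
import math
import random

random.seed(7)

def simulate_time_changed_price(days=365, kappa=5.0, mu=40.0, sigma=8.0):
    dt = 1.0 / 365.0
    x, path = mu, []
    for day in range(days):
        # Seasonal activity rate: the "business clock" runs faster in winter
        activity = 1.0 + 0.5 * math.cos(2.0 * math.pi * day / 365.0)
        dT = activity * dt                      # increment of business time
        # Euler step of a mean-reverting diffusion, driven by business time
        x += kappa * (mu - x) * dT + sigma * math.sqrt(dT) * random.gauss(0, 1)
        path.append(x)
    return path

path = simulate_time_changed_price()
```

Because volatility enters through √dT, a seasonally varying activity rate automatically produces seasonal volatility without changing the base process.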

  10. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    Science.gov (United States)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
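The Corr2 measure used here is MATLAB's 2-D correlation coefficient: the Pearson correlation of two images treated as flattened arrays. A minimal Python equivalent (a sketch of the metric, not the DTS code) is:

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient, analogous to MATLAB's corr2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    am = a - a.mean()   # remove each image's mean intensity
    bm = b - b.mean()
    return (am * bm).sum() / np.sqrt((am ** 2).sum() * (bm ** 2).sum())

# Identical images correlate perfectly; an inverted image correlates at -1
img = np.arange(16.0).reshape(4, 4)
same = corr2(img, img)
inverted = corr2(img, 15.0 - img)
```

Because the mean is removed and the result is normalized, the measure is insensitive to global brightness and contrast, which is why it works across DRRs, radiographs, and fluoroscopic frames.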

  11. A Model for Industrial Real-Time Systems

    DEFF Research Database (Denmark)

    Bin Waez, Md Tawhid; Wasowski, Andrzej; Dingel, Juergen

    2015-01-01

    Introducing automated formal methods for large industrial real-time systems is an important research challenge. We propose timed process automata (TPA) for modeling and analysis of time-critical systems which can be open, hierarchical, and dynamic. The model offers two essential features for large...

  12. Constitutive model with time-dependent deformations

    DEFF Research Database (Denmark)

    Krogsbøll, Anette

    1998-01-01

are common in time as well as size. This problem is addressed by means of a new constitutive model for soils. It is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately. They are related to each other and they occur...... was the difference in time scale between the geological process of deposition (millions of years) and the laboratory measurements of mechanical properties (minutes or hours). In addition, the time scale relevant to the production history of the oil field was interesting (days or years).

  13. Self-produced Time Intervals Are Perceived as More Variable and/or Shorter Depending on Temporal Context in Subsecond and Suprasecond Ranges

    Directory of Open Access Journals (Sweden)

Keita Mitani

    2016-06-01

Full Text Available The processing of time intervals is fundamental for sensorimotor and cognitive functions. Perceptual and motor timing are often performed concurrently (e.g., playing a musical instrument). Although previous studies have shown the influence of body movements on time perception, how we perceive self-produced time intervals has remained unclear. Furthermore, it has been suggested that the timing mechanisms are distinct for the sub- and suprasecond ranges. Here, we compared perceptual performances for self-produced and passively presented time intervals in random contexts (i.e., multiple target intervals presented in a session) across the sub- and suprasecond ranges (Experiment 1) and within the sub- (Experiment 2) and suprasecond (Experiment 3) ranges, and in a constant context (i.e., a single target interval presented in a session) in the sub- and suprasecond ranges (Experiment 4). We show that self-produced time intervals were perceived as shorter and more variable across the sub- and suprasecond ranges and within the suprasecond range, but not within the subsecond range, in a random context. In a constant context, the self-produced time intervals were perceived as more variable in the suprasecond range but not in the subsecond range. The impairing effects indicate that motor timing interferes with perceptual timing. The dependence of impairment on temporal contexts suggests multiple timing mechanisms for the subsecond and suprasecond ranges. In addition, violation of the scalar property (i.e., a constant variability-to-target-interval ratio) was observed between the sub- and suprasecond ranges. The violation was clearer for motor timing than for perceptual timing. This suggests that the multiple timing mechanisms for the sub- and suprasecond ranges overlap more for perception than for motor timing. Moreover, the central tendency effect (i.e., where shorter base intervals are overestimated and longer base intervals are underestimated) disappeared with subsecond

  14. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

Full Text Available We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals; therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
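The identity that lets continuous time models handle unequal intervals is that the discrete-time autoregressive matrix implied by an interval dt is the matrix exponential of the continuous-time drift matrix times dt. This can be checked numerically; the sketch below uses a hypothetical two-process drift matrix (not ctsem's R API) and a hand-rolled matrix exponential, and verifies the semigroup property that two one-unit steps equal one two-unit step.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series.

    Adequate for small, well-scaled matrices; a stand-in for
    scipy.linalg.expm so the example stays dependency-free.
    """
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical continuous-time drift matrix for two coupled latent processes
A = np.array([[-0.5,  0.2],
              [ 0.1, -0.3]])

# Discrete-time autoregressive matrices implied by different intervals
phi_1 = expm(A * 1.0)
phi_2 = expm(A * 2.0)

# Semigroup property: Phi(dt) @ Phi(dt) == Phi(2*dt). A discrete-time model
# fitted at one interval cannot be reused at another without this mapping.
consistent = np.allclose(phi_1 @ phi_1, phi_2)
```

This is exactly why fixing a single cross-lagged coefficient across varying intervals, as discrete time models implicitly do, biases estimates.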

  15. Quantitative Real-time PCR detection of putrescine-producing Gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Kristýna Maršálková

    2017-01-01

Full Text Available Biogenic amines are indispensable components of living cells; nevertheless, at higher concentrations these compounds can be toxic to human health. Putrescine is supposed to be the major biogenic amine associated with microbial food spoilage. Development of reliable, fast and culture-independent molecular methods to detect bacteria producing biogenic amines deserves attention, especially from the food industry, in order to protect health. The objective of this study was to verify the newly designed primer sets for detection of the two inducible genes adiA and speF in the Salmonella enterica and Escherichia coli genomes by Real-time PCR. These genes encode enzymes in the metabolic pathway which leads to production of putrescine in Gram-negative bacteria. Moreover, relative expression of these genes was studied in the E. coli CCM 3954 strain using Real-time PCR. In this study, sets of new primers for the detection of the two inducible genes (speF and adiA) in Salmonella enterica and E. coli by Real-time PCR were designed and tested. Amplification efficiency of the Real-time PCR was calculated from the slope of the standard curves (adiA, speF, gapA). An efficiency in the range of 95 to 105% was achieved for all tested reactions. The gene expression (R) of the adiA and speF genes in E. coli varied depending on culture conditions. The highest gene expression of adiA and speF was observed at 6, 24 and 36 h (RadiA ~ 3, 5, 9; RspeF ~ 11, 10, 9, respectively) after initiation of growth of this bacterium in nutrient broth medium enriched with amino acids. The results show that these primers could be used for relative quantification analysis of E. coli.
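The 95-105% efficiency figure comes from the standard-curve slope via the standard relation E = (10^(-1/slope) - 1) x 100, where the slope is from a regression of Ct on log10 template concentration. A quick numeric check (with synthetic Ct values, not the study's data):

```python
import numpy as np

def amplification_efficiency(log10_conc, ct):
    """Percent qPCR efficiency from a standard curve: E = (10**(-1/slope) - 1) * 100."""
    slope, _ = np.polyfit(log10_conc, ct, 1)   # slope of Ct vs log10(concentration)
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

# Ideal doubling per cycle gives a slope of about -3.32 and ~100% efficiency
log10_conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ct = 35.0 - 3.3219 * log10_conc            # synthetic standard-curve data
eff = amplification_efficiency(log10_conc, ct)
print(round(eff, 1))  # prints 100.0
```

A slope shallower than about -3.32 yields an efficiency above 100%, which in practice usually signals inhibitors or pipetting error rather than super-efficient amplification.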

  16. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. Based on previous research findings, the probability of flow breakdown is represented as a function of flow rate, and the breakdown duration is characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
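The two ingredients can be sketched as follows. The functional forms and coefficients here are illustrative stand-ins (a logistic breakdown probability in flow rate, and a Weibull-distributed duration as a common hazard-model choice), not the calibrated relationships from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def breakdown_prob(flow, beta0=-12.0, beta1=0.006):
    """Illustrative logistic probability of flow breakdown vs flow rate (veh/h)."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * flow)))

def breakdown_duration(shape=1.5, scale=20.0):
    """Illustrative Weibull-distributed breakdown duration (minutes)."""
    return scale * rng.weibull(shape)

# Probability of breakdown rises with flow; each breakdown draws a random duration
flows = [1500, 1800, 2100]
probs = [breakdown_prob(f) for f in flows]
durations = [breakdown_duration() for _ in range(3)]
```

In a mesoscopic simulation these draws would be made at the onset of each breakdown event, so repeated runs produce a travel time distribution rather than a single value.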

  17. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    Directory of Open Access Journals (Sweden)

    Yuhan Rao

    2015-06-01

Full Text Available Due to technical limitations, it is impossible for current NDVI datasets to have high resolution in both the spatial and temporal dimensions. Therefore, several methods have been developed to produce high-resolution (spatial and temporal) NDVI time-series datasets, but they face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently blending MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. This method first unmixes the NDVI temporal changes in the MODIS time-series to different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows high accuracy (average difference: −0.0070; average absolute difference: 0.0228; average absolute relative difference: 4.02%) and computational efficiency (31 seconds using a personal computer). Experiments over more complex landscapes and a long-term time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM) and the Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than current methods. The proposed method will benefit land surface process research, which requires a dense NDVI time-series dataset with high spatial resolution.
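The unmixing step at the heart of such a method, attributing each coarse pixel's NDVI change to the land-cover classes inside it, is an ordinary least-squares problem. The toy sketch below (synthetic data and hypothetical class fractions, not the published implementation) recovers per-class NDVI changes from coarse observations and applies them at fine resolution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 50 coarse (MODIS-like) pixels, 3 land-cover classes
fractions = rng.dirichlet(np.ones(3), size=50)   # class fractions per coarse pixel
true_dndvi = np.array([0.10, -0.02, 0.05])       # per-class NDVI change (unknown)
coarse_dndvi = fractions @ true_dndvi + 0.001 * rng.standard_normal(50)

# Unmix: least-squares estimate of each class's NDVI temporal change
est_dndvi, *_ = np.linalg.lstsq(fractions, coarse_dndvi, rcond=None)

# Transfer to fine (Landsat-like) resolution via each fine pixel's class label
fine_classes = np.array([0, 1, 2, 0, 1])
fine_prediction = est_dndvi[fine_classes]
```

Because only a small linear system is solved per window, this style of unmixing is far cheaper than learning per-pixel weighting functions, which is consistent with the 31-second runtime quoted above.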

  18. Identification of a parametric, discrete-time model of ankle stiffness.

    Science.gov (United States)

    Guarin, Diego L; Jalaleddini, Kian; Kearney, Robert E

    2013-01-01

Dynamic ankle joint stiffness defines the relationship between the position of the ankle and the torque acting about it and can be separated into intrinsic and reflex components. Under stationary conditions, intrinsic stiffness can be described by a linear second-order system, while reflex stiffness is described by a Hammerstein system whose input is delayed velocity. Given that reflex and intrinsic torque cannot be measured separately, there has been much interest in the development of system identification techniques to separate them analytically. To date, most methods have been nonparametric, and as a result there is no direct link between the estimated parameters and those of the stiffness model. This paper presents a novel algorithm for identification of a discrete-time model of ankle stiffness. Through simulations we show that the algorithm gives unbiased results even in the presence of large, non-white noise. Application of the method to experimental data demonstrates that it produces results consistent with previous findings.
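The parallel-pathway structure described above can be simulated directly in discrete time. This is a generic illustration of the intrinsic (second-order linear) plus reflex (Hammerstein: delay, static nonlinearity, linear dynamics) decomposition with hypothetical parameter values; it is not the paper's identification algorithm, only the forward model such an algorithm would fit:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, delay_steps = 0.001, 40          # 1 kHz sampling, 40 ms reflex delay (assumed)
I, B, K = 0.01, 0.5, 100.0           # hypothetical inertia, viscosity, elasticity
g, a = 5.0, 0.98                     # reflex gain and first-order filter pole

pos = np.cumsum(0.0005 * rng.standard_normal(2000))   # imposed position perturbation
vel = np.gradient(pos, dt)
acc = np.gradient(vel, dt)

# Intrinsic torque: linear second-order function of position
tq_int = I * acc + B * vel + K * pos

# Reflex torque: Hammerstein system driven by delayed velocity
delayed_vel = np.concatenate([np.zeros(delay_steps), vel[:-delay_steps]])
nl = np.maximum(delayed_vel, 0.0)     # static nonlinearity (half-wave rectifier)
tq_ref = np.zeros_like(nl)
for i in range(1, len(nl)):
    tq_ref[i] = a * tq_ref[i - 1] + (1 - a) * g * nl[i]   # first-order linear dynamics

total_torque = tq_int + tq_ref        # only this sum is observable experimentally
```

The identification problem is hard precisely because only `total_torque` can be measured: the two pathways must be separated analytically from input-output data.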

  19. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    Science.gov (United States)

    Octova, A.; Sule, R.

    2018-04-01

Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers are placed in the others. First-arrival travel time data received by each receiver are used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters. The field parameters are divided into 24 receivers and 45 receivers. The second step is applying the inversion process to the field data, which consist of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomography processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5 with elongated and rounded structures. In resolution tests using a checker-board, anomalies as small as 2 meter x 2 meter can still be identified. Travel time cross-hole seismic tomography analysis proves this method is very good at describing subsurface structure and layer boundaries. The size and position of anomalies can be recognized and interpreted easily.

  20. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

Full Text Available The REFII model is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is that it links different methods for time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, so it is not restricted to a fixed set of methods. First and foremost, it is a model for transforming the values of a time series, preparing data to be used by different sets of methods based on the same transformation model within the problem domain. The REFII model offers a new approach to time series analysis based on a unique model of transformation, which serves as a base for all kinds of time series analysis. A further advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  1. Experimental and mathematical model of the interactions in the mixed culture of links in the "producer-consumer" cycle

    Science.gov (United States)

    Pisman, T. I.; Galayda, Ya. V.

The paper presents an experimental and mathematical model of interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the algae Scenedesmus produce an adverse effect on the reproduction of the ciliates P. caudatum. Taking into account this effect, the results of investigation of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.

  2. Producing Coordinate Time Series for Iraq's CORS Site for Detection Geophysical Phenomena

    Directory of Open Access Journals (Sweden)

    Oday Yaseen Mohamed Zeki Alhamadani

    2018-01-01

Full Text Available Global Navigation Satellite Systems (GNSS) have become an integral part of a wide range of applications. One of these applications is in cellular phones, to locate the position of users, and this technology has been employed in social media applications. Moreover, GNSS has been effectively employed in transportation, GIS, mobile satellite communications, etc. On the other hand, the geomatics sciences use GNSS for many practical and scientific applications such as surveying, mapping, and monitoring. In this study, the GNSS raw data of the ISER CORS, which is located in the north of Iraq, are processed and analyzed to build up coordinate time series for the purpose of detecting Arabian tectonic plate motion over seven and a half years. Such coordinate time series have been produced very efficiently using GNSS Precise Point Positioning (PPP). The daily PPP results were processed, analyzed, and presented as coordinate time series using GPS Interactive Time Series Analysis. Furthermore, MATLAB (V.2013a) is used in this study to computerize GITSA with a Graphic User Interface (GUI). The objective of this study was to investigate both the homogeneity and consistency of the Iraq CORS GNSS raw data for detecting any geophysical changes over a long period of time. Additionally, this study aims to employ free online PPP services, such as the CSRS_PPP software, for processing GNSS raw data to generate GNSS coordinate time series. The coordinate time series of the ISER station showed +20.9 mm per year, +27.2 mm per year, and -11.3 mm per year in the East, North, and up-down components, respectively.
These findings showed a remarkable similarity with those obtained by long-term monitoring of Earth's crust deformation and movement based on global studies and this highlights the importance of using GNSS for monitoring the movement of tectonic plate motion based on CORS and online GNSS data processing services over long period of
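The mm-per-year figures above are the slopes of linear trends fitted to the daily coordinate time series. The estimation step can be sketched with synthetic data mimicking the East component; the +20.9 mm/year rate is taken from the abstract, while the noise level and sampling are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily East-component series over 7.5 years with a +20.9 mm/yr trend
t_years = np.arange(0.0, 7.5, 1.0 / 365.25)
east_mm = 20.9 * t_years + 2.0 * rng.standard_normal(t_years.size)  # 2 mm noise

# Velocity estimate: slope of a linear fit to the time series
rate, intercept = np.polyfit(t_years, east_mm, 1)   # mm per year
```

With thousands of daily solutions, even centimeter-level daily scatter averages down to a sub-mm/yr uncertainty on the velocity, which is why multi-year CORS series can resolve plate motion.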

  3. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows to formulate complex measures – involving expected as well as accumulated rewards – in a precise and

  4. Real-time advanced nuclear reactor core model

    International Nuclear Information System (INIS)

    Koclas, J.; Friedman, F.; Paquette, C.; Vivier, P.

    1990-01-01

The paper describes a multi-nodal advanced nuclear reactor core model. The model is based on application of modern equivalence theory to the solution of the neutron diffusion equation in real time, employing the finite differences method. The use of equivalence theory allows the application of the finite differences method to cores divided into hundreds of nodes, as opposed to the much finer divisions (on the order of tens of thousands of nodes) where the unmodified method is currently applied. As a result the model can be used for modelling of the core kinetics for real-time full scope training simulators. Results of benchmarks validate the basic assumptions of the model and its applicability to real-time simulation. (orig./HP)

  5. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  6. Space-time modeling of electricity spot prices

    DEFF Research Database (Denmark)

    Abate, Girum Dagnachew; Haldrup, Niels

In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network...... in the spot price dynamics. Estimation of the spatial Durbin model shows that the spatial lag variable is as important as the temporal lag variable in describing the spot price dynamics. We use the partial derivatives impact approach to decompose the price impacts into direct and indirect effects and we show...... that price effects transmit to neighboring markets and decline with distance. In order to examine the evolution of the spatial correlation over time, a time varying parameters spot price spatial Durbin model is estimated using recursive estimation. It is found that the spatial correlation within the Nord

  7. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  8. Impedance models in time domain

    NARCIS (Netherlands)

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  9. Logic Model Checking of Time-Periodic Real-Time Systems

    Science.gov (United States)

    Florian, Mihai; Gamble, Ed; Holzmann, Gerard

    2012-01-01

    In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.

  10. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and has been known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating the parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of both models was almost similar. Both models produced similar estimates that were better than the direct estimate, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this was reduced over time.

  11. A new timing model for calculating the intrinsic timing resolution of a scintillator detector

    International Nuclear Information System (INIS)

    Shao Yiping

    2007-01-01

The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET, which requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: the scintillation decay time and the total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: the scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy of calculated timing resolutions.
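The qualitative effect of rise time is easy to reproduce with a small Monte Carlo. The bi-exponential emission profile proportional to exp(-t/tau_decay) - exp(-t/tau_rise) is exactly the density of the sum of two exponential waiting times, which makes sampling trivial. The photon counts and time constants below are hypothetical, and the arrival time of the first detected photon stands in for the timing estimator (the paper's analytic model is more refined):

```python
import numpy as np

rng = np.random.default_rng(5)

def first_photon_sigma(tau_rise, tau_decay, n_photons, n_events=20000):
    """Monte Carlo spread (std, in ns) of the first detected photon time.

    Each photon's emission time is the sum of two exponential waits, whose
    density is the bi-exponential pulse shape exp(-t/td) - exp(-t/tr).
    """
    t = (rng.exponential(tau_rise, (n_events, n_photons))
         + rng.exponential(tau_decay, (n_events, n_photons)))
    return t.min(axis=1).std()

# A finite rise time degrades timing relative to a near-instantaneous rise,
# even with the same decay constant and photoelectron yield
fast = first_photon_sigma(tau_rise=0.01, tau_decay=40.0, n_photons=2000)
slow = first_photon_sigma(tau_rise=1.0, tau_decay=40.0, n_photons=2000)
```

This is the single-exponential model's blind spot: with zero rise time the earliest photons pile up near t = 0, so ignoring the rise time overestimates how well a fast scintillator can time.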

  12. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson, Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
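The first two families in the survey can be written down directly: exponential discounting at rate k, and the one-parameter hyperbolic form. Since e^(kD) >= 1 + kD, the hyperbolic value always lies above the exponential value for the same k, and the gap widens with delay, which is the behavior behind the characteristic preference reversals:

```python
import numpy as np

def exponential_discount(delay, k):
    """Exponential discounting: value decays at a constant proportional rate."""
    return np.exp(-k * delay)

def hyperbolic_discount(delay, k):
    """One-parameter hyperbolic discounting: steep early, shallow late."""
    return 1.0 / (1.0 + k * delay)

# With the same k, the two functions diverge sharply at long delays
k = 0.05
short_gap = hyperbolic_discount(1.0, k) - exponential_discount(1.0, k)
long_gap = hyperbolic_discount(100.0, k) - exponential_discount(100.0, k)
```

Isolating the rate parameter k, as the survey recommends, is what makes such models comparable: each discount function maps observed indifference points to a single impatience parameter.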

  13. Local models violating Bell's inequality by time delays

    International Nuclear Information System (INIS)

    Scalera, G.C.

    1984-01-01

The performance of ensemble averages is neither a sufficient nor a necessary condition to avoid the Bell's inequality violations characteristic of nonergodic systems. Slight modifications of a local nonergodic logical model violating Bell's inequality produce a stochastic model exactly fitting the quantum-mechanical correlation function. From these considerations it appears evident that the latest experiments on the existence of local hidden variables are not conclusive.

  14. iVAR: a program for imputing missing data in multivariate time series using vector autoregressive models.

    Science.gov (United States)

    Liu, Siwei; Molenaar, Peter C M

    2014-12-01

    This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
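The principle behind VAR-based imputation can be sketched in a few lines: fit the VAR coefficients by least squares on observed transitions, then replace a missing observation with the model's one-step prediction. This is a bare-bones illustration of the idea with a simulated bivariate VAR(1), not iVAR's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a bivariate VAR(1) series: x_t = A @ x_{t-1} + noise
A = np.array([[0.6, 0.2],
              [0.1, 0.5]])
x = np.zeros((300, 2))
for t in range(1, 300):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(2)

# Estimate the transition matrix by least squares (regress x_t on x_{t-1})
coef, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = coef.T

# Impute a (hypothetically) missing observation from the one-step prediction
missing_t = 150
imputed = A_hat @ x[missing_t - 1]
```

Because the prediction uses the cross-lagged structure of all series jointly, this preserves the time dependency that listwise deletion and mean imputation destroy, which is the advantage reported in the simulation study above.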

  15. Modeling transient luminous events produced by cloud to ground lightning and narrow bipolar pulses: detailed spectra and chemical impact

    Science.gov (United States)

    Perez-Invernon, F. J.; Luque, A.; Gordillo-Vazquez, F. J.

    2017-12-01

The electromagnetic field generated by lightning discharges can produce Transient Luminous Events (TLEs) in the lower ionosphere, as previously investigated by many authors. Some recent studies suggest that narrow bipolar pulses (NBP), an impulsive and not well-established type of atmospheric electrical discharge, could also produce TLEs. The characterization and observation of such TLEs could be a source of information about the physics underlying NBP. In this work, we develop two different electrodynamical models to study the impact of lightning-driven electromagnetic fields on the lower ionosphere. The first model calculates the quasi-electrostatic field produced by a single cloud-to-ground lightning discharge in the terrestrial atmosphere and its influence on the electron transport. This scheme allows us to study halos, a relatively frequent type of TLE. The second model solves the Maxwell equations for the electromagnetic field produced by a lightning discharge, coupled with the Langevin equation for the induced currents in the ionosphere. This model is useful to investigate elves, a fast TLE produced by lightning or by NBP. In addition, both models are coupled with a detailed chemistry of the electronically and vibrationally excited states of molecular nitrogen, allowing us to calculate synthetic spectra of both halos and elves. The models also include a detailed set of kinetic reactions to calculate the temporal evolution of other species. Our results suggest an important enhancement of some molecular species produced by halos, such as NOx, N2O and other metastable species. The quantification of their production could be useful to understand the role of thunderstorms in the climate of our planet. In the case of TLEs produced by NBP, our model confirms the appearance of double elves and allows us to compute their spectral characteristics.

  16. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model gradually increases its frequency of canonical babbling. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior, and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing the muscles’ range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. This suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop

  17. A seasonal model of contracts between a monopsonistic processor and smallholder pepper producers in Costa Rica

    NARCIS (Netherlands)

    Sáenz Segura, F.; Haese, D' M.F.C.; Schipper, R.A.

    2010-01-01

    We model the contractual arrangements between smallholder pepper (Piper nigrum L.) producers and a single processor in Costa Rica. Producers in the El Roble settlement sell their pepper to only one processing firm, which exerts its monopsonistic bargaining power by setting the purchase price of

  18. Modeling of the structure in aqueous solution of the exopolysaccharide produced by Lactobacillus helveticus 766

    NARCIS (Netherlands)

    Vliegenthart, J.F.G.; Faber, E.J.; Kuik, J.A. van; Kamerling, J.P.

    2002-01-01

    A method is described for constructing a conformational model in water of a heteropolysaccharide built up from repeating units, and is applied to the exopolysaccharide produced by Lactobacillus helveticus 766. The molecular modeling method is based on energy minima, obtained from molecular mechanics

  19. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
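The abstract's central idea — transforming a nonhomogeneous Markov process onto an operational time scale on which it is homogeneous — can be sketched briefly. Assuming a two-state generator Q and a power-law transformation h(t) = t^1.5 (both illustrative choices, not from the paper), calendar-time transition probabilities follow from the matrix exponential of Q scaled by increments of h:

```python
import numpy as np

def expm(A, terms=30):
    # truncated Taylor-series matrix exponential (adequate for small, well-scaled A)
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

def transition_prob(Q, h, s, t):
    """P(s, t) = exp(Q * (h(t) - h(s))): homogeneous on the operational scale h."""
    return expm(Q * (h(t) - h(s)))

# illustrative two-state generator on the operational time scale
Q = np.array([[-0.5, 0.5],
              [0.2, -0.2]])

# operational time h(t) = t**1.5 makes the process nonhomogeneous in calendar time
h = lambda t: t ** 1.5

P = transition_prob(Q, h, 0.0, 2.0)  # rows are proper probability distributions
```

The estimation problem in the paper is the inverse of this sketch: h and Q are jointly fitted from panel data by Fisher scoring; the code only shows the forward mapping that such a fit relies on.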

  20. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
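A minimal sketch of the transformation at the heart of this model: profile out the normal parameters and choose the Box-Cox parameter λ by maximum likelihood, including the Jacobian term. The paper works in a Bayesian framework (hence the DIC); the classical profile-likelihood grid search and the simulated lognormal "response times" below are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def box_cox(t, lam):
    """Box-Cox transform; lam = 0 recovers the log-transform."""
    t = np.asarray(t, dtype=float)
    if lam == 0:
        return np.log(t)
    return (t ** lam - 1.0) / lam

def neg_log_lik(t, lam):
    """Profile negative log-likelihood of a normal model for the transformed
    times, with the Jacobian term (lam - 1) * sum(log t) of the transform."""
    x = box_cox(t, lam)
    n = len(t)
    return 0.5 * n * np.log(x.var()) - (lam - 1.0) * np.log(t).sum()

rng = np.random.default_rng(0)
times = np.exp(rng.normal(0.5, 0.3, size=500))  # lognormal response times

# grid search over the transformation parameter; lam near 0 should win here,
# confirming that the log-transform is appropriate for lognormal data
grid = np.linspace(-1, 1, 41)
lam_hat = min(grid, key=lambda lam: neg_log_lik(times, lam))
```

For real data that violate lognormality, the fitted λ moves away from 0, which is exactly the extra flexibility the Box-Cox normal model offers.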

  1. The cooling time of white dwarfs produced from type Ia supernovae

    International Nuclear Information System (INIS)

    Meng Xiangcun; Yang Wuming; Li Zhongmu

    2010-01-01

Type Ia supernovae (SNe Ia) play a key role in measuring cosmological parameters, in which the Phillips relation is adopted. However, the origin of the relation is still unclear. Several parameters are suggested, e.g. the relative content of carbon to oxygen (C/O) and the central density of the white dwarf (WD) at ignition. These parameters are mainly determined by the WD's initial mass and its cooling time, respectively. Using the progenitor model developed by Meng and Yang, we present the distributions of the initial WD mass and the cooling time. We do not find any correlation between these parameters. However, we notice that as the range of the WD's mass decreases, its average value increases with the cooling time. These results could provide a constraint when simulating the SN Ia explosion, i.e. the WDs with a high C/O ratio usually have a lower central density at ignition, while those having the highest central density at ignition generally have a lower C/O ratio. The cooling time is mainly determined by the evolutionary age of secondaries, and the scatter of the cooling time decreases with the evolutionary age. Our results may indicate that WDs with a long cooling time have more uniform properties than those with a short cooling time, which may be helpful to explain why SNe Ia in elliptical galaxies have a more uniform maximum luminosity than those in spiral galaxies.

  2. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key topics include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; and sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  3. An analytic solution to the alibi query in the space-time prisms model for moving object data

    OpenAIRE

    GRIMSON, Rafael; KUIJPERS, Bart; OTHMAN, Walied

    2010-01-01

Moving objects produce trajectories, which are stored in databases by means of finite samples of time-stamped locations. When speed limits at these sample points are also known, space-time prisms (also called beads) (Egenhofer 2003, Miller 2005, Pfoser and Jensen 1999) can be used to model the uncertainty about an object’s location in between sample points. In this setting, a query of particular interest, that has been studied in the literature of geographic information systems (GIS), is...

  4. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...

  5. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.

  6. Fuzzy time series forecasting model with natural partitioning length approach for predicting the unemployment rate under different degree of confidence

    Science.gov (United States)

    Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud

    2017-08-01

Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the length of intervals and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first order and second order fuzzy relations. The proposed model can produce forecasted values under different degrees of confidence.

  7. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    textabstractA recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  8. Regolith history from cosmic-ray-produced isotopes

    International Nuclear Information System (INIS)

    Fireman, E.L.

    1974-04-01

A statistical model is given for soil development relating meteoroid impacts on the moon to cosmic-ray-produced isotopes in the soil. By means of this model, the average lunar mass loss rate during the past 14 aeons is determined to be 170 g/sq cm aeon and the soil mixing rate to be approximately 200 cm/aeon from the gadolinium isotope data for the Apollo 15 and 16 drill stems. The isotope data also restrict the time variation of the meteoroid flux during the past 14 aeons.

  9. Reconstruction of ensembles of coupled time-delay systems from time series.

    Science.gov (United States)

    Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P

    2014-06-01

    We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.

  10. Linear system identification via backward-time observer models

    Science.gov (United States)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  11. Trading speed and accuracy by coding time: a coupled-circuit cortical model.

    Directory of Open Access Journals (Sweden)

    Dominic Standage

    2013-04-01

Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
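The hypothesized mechanism — a temporal code gain-modulating the rate at which spatial evidence is integrated — can be illustrated with a much-reduced accumulator model. This is not the authors' spiking network; the drift, gain, noise and threshold values are arbitrary, and only the qualitative speed-accuracy consequence of gain is shown:

```python
import random

def decision_time(drift, gain, threshold=1.0, noise=0.1, dt=1e-3, rng=None):
    """One drift-diffusion trial: a timing signal gain-modulates the rate at
    which evidence accumulates toward a fixed decision threshold."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # gain scales the deterministic evidence inflow; noise is diffusive
        x += gain * drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(1)
slow = sum(decision_time(1.0, 0.5, rng=rng) for _ in range(50)) / 50
fast = sum(decision_time(1.0, 2.0, rng=rng) for _ in range(50)) / 50
# higher gain -> faster integration -> shorter mean decision times,
# at the cost of integrating less evidence per unit of elapsed noise
```

In the paper's terms, "learning a shorter interval" in the timing network corresponds to raising the gain here, which trades accuracy for speed.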

  12. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult stylized fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.

  13. Timing intervals using population synchrony and spike timing dependent plasticity

    Directory of Open Access Journals (Sweden)

    Wei Xu

    2016-12-01

We present a computational model by which ensembles of regularly spiking neurons can encode different time intervals through synchronous firing. We show that a neuron responding to a large population of convergent inputs has the potential to learn to produce an appropriately-timed output via spike-time dependent plasticity. We explain why temporal variability of this population synchrony increases with increasing time intervals. We also show that the scalar property of timing and its violation at short intervals can be explained by the spike-wise accumulation of jitter in the inter-spike intervals of timing neurons. We explore how the challenge of encoding longer time intervals can be overcome and conclude that this may involve a switch to a different population of neurons with lower firing rate, with the added effect of producing an earlier bias in response. Experimental data on human timing performance show features in agreement with the model’s output.

  14. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions that minimize vehicle delay time. To improve people’s trip efficiency, this article aims to minimize delay time per person. Based on the time-varying traffic flow data at intersections, the article first fits curves of accumulative arrival and departure vehicles, as well as the corresponding functions. Moreover, this article converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to achieve real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
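A toy version of the person-based objective can be sketched as follows. Webster's uniform-delay term stands in for the article's fitted arrival/departure curves, and the passenger loads, flows and signal parameters are hypothetical; the point is only how person-weighting changes the green split chosen by the optimizer:

```python
def uniform_delay(C, g, q, s):
    """Webster-style uniform delay (s/veh): cycle C, effective green g,
    arrival rate q and saturation flow s in consistent units (q < s)."""
    return 0.5 * C * (1 - g / C) ** 2 / (1 - q / s)

def person_delay(veh_delays, flows, occupancies):
    """Average delay per person: weight each movement's per-vehicle delay
    by its flow and its average passenger load."""
    persons = [f * o for f, o in zip(flows, occupancies)]
    total = sum(d * p for d, p in zip(veh_delays, persons))
    return total / sum(persons)

C, lost = 90.0, 10.0   # cycle length and lost time (s), hypothetical
q = [0.30, 0.10]       # arrival rates (veh/s): car approach, bus approach
s = [0.90, 0.90]       # saturation flows (veh/s)
occ = [1.3, 15.0]      # assumed passengers per car / per bus

def objective(g1):
    """Average person delay for green split (g1, C - lost - g1)."""
    g2 = C - lost - g1
    d = [uniform_delay(C, g1, q[0], s[0]),
         uniform_delay(C, g2, q[1], s[1])]
    return person_delay(d, q, occ)

# brute-force search over the first approach's green time (s)
best_g1 = min(range(20, 61), key=objective)
```

With vehicle-weighting (all occupancies equal) the optimizer favors the high-flow car approach; the bus occupancy above pulls green time toward the bus approach instead, which is the behavioral difference the article's objective is designed to produce.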

  15. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.

  16. Persistent producer-scrounger relationships in bats.

    Science.gov (United States)

    Harten, Lee; Matalon, Yasmin; Galli, Naama; Navon, Hagit; Dor, Roi; Yovel, Yossi

    2018-02-01

    Social foraging theory suggests that group-living animals gain from persistent social bonds, which lead to increased tolerance in competitive foraging and information sharing. Bats are among the most social mammals, often living in colonies of tens to thousands of individuals for dozens of years, yet little is known about their social foraging dynamics. We observed three captive bat colonies for over a year, quantifying >13,000 social foraging interactions. We found that individuals consistently used one of two foraging strategies, either producing (collecting) food themselves or scrounging it directly from the mouth of other individuals. Individual foraging types were consistent over at least 16 months except during the lactation period when females shifted toward producing. Scroungers intentionally selected whom to interact with when socially foraging, thus generating persistent nonrandom social relationships with two to three specific producers. These persistent producer-scrounger relationships seem to reduce aggression over time. Finally, scrounging was highly correlated with vigilance, and we hypothesize that vigilant-prone individuals turn to scrounging in the wild to mitigate the risk of landing on a potentially unsafe fruit tree. We find the bat colony to be a rich and dynamic social system, which can serve as a model to study the role that social foraging plays in the evolution of mammalian sociality. Our results highlight the importance of considering individual tendencies when exploring social behavior patterns of group-living animals. These tendencies further emphasize the necessity of studying social networks over time.

  17. Evapotranspiration sensitivity to air temperature across a snow-influenced watershed: Space-for-time substitution versus integrated watershed modeling

    Science.gov (United States)

    Jepsen, S. M.; Harmon, T. C.; Ficklin, D. L.; Molotch, N. P.; Guan, B.

    2018-01-01

    Changes in long-term, montane actual evapotranspiration (ET) in response to climate change could impact future water supplies and forest species composition. For scenarios of atmospheric warming, predicted changes in long-term ET tend to differ between studies using space-for-time substitution (STS) models and integrated watershed models, and the influence of spatially varying factors on these differences is unclear. To examine this, we compared warming-induced (+2 to +6 °C) changes in ET simulated by an STS model and an integrated watershed model across zones of elevation, substrate available water capacity, and slope in the snow-influenced upper San Joaquin River watershed, Sierra Nevada, USA. We used the Soil Water and Assessment Tool (SWAT) for the watershed modeling and a Budyko-type relationship for the STS modeling. Spatially averaged increases in ET from the STS model increasingly surpassed those from the SWAT model in the higher elevation zones of the watershed, resulting in 2.3-2.6 times greater values from the STS model at the watershed scale. In sparse, deep colluvium or glacial soils on gentle slopes, the SWAT model produced ET increases exceeding those from the STS model. However, watershed areas associated with these conditions were too localized for SWAT to produce spatially averaged ET-gains comparable to the STS model. The SWAT model results nevertheless demonstrate that such soils on high-elevation, gentle slopes will form ET "hot spots" exhibiting disproportionately large increases in ET, and concomitant reductions in runoff yield, in response to warming. Predicted ET responses to warming from STS models and integrated watershed models may, in general, substantially differ (e.g., factor of 2-3) for snow-influenced watersheds exhibiting an elevational gradient in substrate water holding capacity and slope. Long-term water supplies in these settings may therefore be more resilient to warming than STS model predictions would suggest.

  18. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    Science.gov (United States)

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad is becoming more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.
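For readers unfamiliar with the machinery, the state inference underlying such models is the HMM forward pass, sketched here for a toy two-state "distressed/soothed" dyad. All probabilities are invented for illustration and bear no relation to the study's fitted 4- and 6-state models:

```python
import numpy as np

def forward_posteriors(obs, pi, A, B):
    """Normalized forward pass of a discrete HMM: P(state_t | obs_1..t)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

# toy model: states 0 = "distressed", 1 = "soothed";
# observations 0 = crying, 1 = calm
pi = np.array([0.9, 0.1])          # infants start distressed post-inoculation
A = np.array([[0.8, 0.2],          # soothing makes "soothed" sticky
              [0.1, 0.9]])
B = np.array([[0.9, 0.1],          # distressed state mostly emits crying
              [0.2, 0.8]])

post = forward_posteriors([0, 0, 1, 1, 1], pi, A, B)
# posterior probability of "soothed" rises as calm observations accumulate
```

Fitting A, B and the number of states to coded mother-infant behavior (rather than fixing them, as here) is what produces the latent-state descriptions reported in the study.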

  19. Support for the Logical Execution Time Model on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Kluge, Florian; Schoeberl, Martin; Ungerer, Theo

    2016-01-01

The logical execution time (LET) model increases the compositionality of real-time task sets. Removal or addition of tasks does not influence the communication behavior of other tasks. In this work, we extend a multicore operating system running on a time-predictable multicore processor to support the LET model. For communication between tasks we use message passing on a time-predictable network-on-chip to avoid the bottleneck of shared memory. We report our experiences and present results on the costs in terms of memory and execution time.

  20. Regular transport dynamics produce chaotic travel times.

    Science.gov (United States)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  1. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    Science.gov (United States)

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley
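The anchoring-and-insufficient-adjustment mechanism that produces trend damping can be written in a few lines. ADAM itself is an agent-based model, and the window and adjustment values below are arbitrary; the sketch only shows why anchoring on recent values damps both upward and downward trends:

```python
def anchored_forecast(series, window=5, adjustment=0.6):
    """Anchor on the mean of the last `window` values, then adjust only
    partially (adjustment < 1) toward the final observation."""
    anchor = sum(series[-window:]) / window
    return anchor + adjustment * (series[-1] - anchor)

upward = list(range(1, 11))        # 1..10; the trend would continue to 11
downward = list(range(10, 0, -1))  # 10..1; the trend would continue to 0

up_forecast = anchored_forecast(upward)      # falls short of the trend
down_forecast = anchored_forecast(downward)  # overshoots the trend
```

In this framing, the dynamic-presentation effect reported above corresponds to shrinking the effective window (anchoring on more recent events), which moves the anchor closer to the trend's leading edge and improves forecast accuracy.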

  2. Fuzzy rule-based modelling for human health risk from naturally occurring radioactive materials in produced water

    International Nuclear Information System (INIS)

    Shakhawat, Chowdhury; Tahir, Husain; Neil, Bose

    2006-01-01

    Produced water, discharged from offshore oil and gas operations, contains chemicals from formation water, condensed water, and any chemical added down hole or during the oil/water separation process. Although most of the contaminants fall below detection limits within a short distance of the discharge port, a few of the remaining contaminants, including naturally occurring radioactive materials (NORM), are of concern due to their bioavailability in the media and their bioaccumulation in finfish and shellfish species used for human consumption. In the past, several initiatives have been taken to model human health risk from NORM in produced water, but the parameters of the available risk assessment models are imprecise and sparse in nature. In this study, a possibilistic evaluation using fuzzy rule-based modeling is presented. Being conservative in nature, the possibilistic approach considers the full range of possible input parameter values and thus provides better environmental prediction than a Monte Carlo (MC) calculation. The uncertainties of the input parameters were captured with triangular fuzzy numbers (TFNs). Fuzzy if-then rules were applied to the input concentrations of the two isotopes of radium available in produced water, ²²⁶Ra and ²²⁸Ra, and to the bulk dilution, to evaluate the radium concentration in fish tissue used for human consumption. The bulk dilution was predicted using four input parameters: produced water discharge rate, ambient seawater velocity, depth of the discharge port and density gradient. The evaluated cancer risk complies with the regulatory guidelines; thus minimal risk to human health is expected from NORM components in produced water
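
    A triangular membership function and a single Mamdani-style min rule, the basic building blocks of such a fuzzy rule base, can be sketched as follows; the TFNs and the rule itself are hypothetical, not the study's.

```python
def tri_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical TFNs: radium concentration (Bq/L) and bulk dilution factor
low_ra = (0.0, 0.5, 1.0)
high_dilution = (100.0, 500.0, 900.0)

# one Mamdani-style rule: IF Ra is low AND dilution is high THEN risk is low;
# the rule's firing strength is the minimum of the antecedent memberships
strength = min(tri_membership(0.4, *low_ra),
               tri_membership(600.0, *high_dilution))
print(strength)   # 0.75
```

    A full rule base would aggregate several such rules and defuzzify the combined output to obtain a crisp risk estimate.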

  3. Time-resolved flow reconstruction with indirect measurements using regression models and Kalman-filtered POD ROM

    Science.gov (United States)

    Leroux, Romain; Chatellier, Ludovic; David, Laurent

    2018-01-01

    This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields from time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Owing to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied to a proper orthogonal decomposition reduced-order model, stabilizing it while reducing the effects of hot-film sensor noise. The method is assessed on the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both OPLSR and the EnKF combined with OPLSR are more accurate, producing a much lower relative estimation error and a faithful reconstruction of the time evolution of the velocity flow fields.
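
    A minimal stochastic ensemble-Kalman analysis step of this kind can be sketched with a linear observation operator standing in for the OPLSR voltage map; the state dimension, ensemble values and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, H, obs_std):
    """One stochastic EnKF analysis step. `ensemble` is (members, n_state)
    with rows of POD coefficients, `H` a linear observation operator mapping
    a state to the scalar sensor voltage, `obs_std` the sensor noise level."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    Hx = ensemble @ H                        # predicted sensor readings
    HX = Hx - Hx.mean()
    var_hh = HX @ HX / (n - 1) + obs_std**2  # innovation variance
    cov_xh = X.T @ HX / (n - 1)              # state-observation covariance
    K = cov_xh / var_hh                      # Kalman gain, shape (n_state,)
    perturbed_obs = obs + obs_std * rng.standard_normal(n)
    return ensemble + np.outer(perturbed_obs - Hx, K)

ens = rng.standard_normal((50, 3)) + np.array([1.0, 0.0, -1.0])
H = np.array([1.0, 0.0, 0.0])
updated = enkf_update(ens, obs=2.0, H=H, obs_std=0.1)
print(round(float(updated[:, 0].mean()), 2))   # pulled from ~1.0 toward 2.0
```

    In the paper's setting the forecast ensemble would come from integrating the POD reduced-order model between sensor samples rather than from random draws.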

  4. The space-time model according to dimensional continuous space-time theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2014-01-01

    This article results from the Dimensional Continuous Space-Time Theory, whose introductory framework was presented in [1]. A theoretical model of continuous space-time is presented. The wave equation of time in an absolutely stationary empty-space referential is described in detail. Complex time, that is, time fixed on the infinite-phase-time-speed referential, is deduced from the New View of Relativity Theory, which is being submitted simultaneously with this article at this congress. Finally, considering the inseparability of space-time, the wave-particle duality equation is presented.

  5. Time-specific ecological niche modeling predicts spatial dynamics of vector insects and human dengue cases.

    Science.gov (United States)

    Peterson, A Townsend; Martínez-Campos, Carmen; Nakazawa, Yoshinori; Martínez-Meyer, Enrique

    2005-09-01

    Numerous human diseases (malaria, dengue, yellow fever and leishmaniasis, to name a few) are transmitted by insect vectors with brief life cycles and biting activity that varies in both space and time. Although the general geographic distributions of these epidemiologically important species are known, the spatiotemporal variation in their emergence and activity remains poorly understood. We used ecological niche modeling via a genetic algorithm to produce time-specific predictive models of monthly distributions of Aedes aegypti in Mexico in 1995. Significant predictions of monthly mosquito activity and distributions indicate that predicting the spatiotemporal dynamics of disease vector species is feasible; significant coincidence with human cases of dengue indicates that these dynamics probably translate directly into transmission of dengue virus to humans. This approach provides new potential for optimizing the use of resources for disease prevention and remediation via automated forecasting of disease transmission risk.

  6. A method of modeling time-dependent rock damage surrounding underground excavations in multiphase groundwater flow

    International Nuclear Information System (INIS)

    Christian-Frear, T.; Freeze, G.

    1997-01-01

    Underground excavations produce surrounding damaged zones with disturbed hydrologic and geomechanical properties. Prediction of fluid flow in these zones must consider both the mechanical and the fluid flow processes. Presented here is a methodology which uses a mechanical model to predict damage and disturbed rock zone (DRZ) development around the excavation, and then uses the predictions to develop time-dependent DRZ porosity relationships. These relationships are then used to adjust the porosity of the DRZ in the fluid flow model based upon the time and the distance from the edge of the excavation. The application of this methodology is presented using a site-specific example from the Waste Isolation Pilot Plant, a US Department of Energy facility in bedded salts being evaluated for demonstration of the safe underground disposal of transuranic waste from US defense-related activities

  7. Space and time resolved spectroscopy of laser-produced plasmas: A study of density-sensitive x-ray transitions in helium-like and neon-like ions

    Energy Technology Data Exchange (ETDEWEB)

    Young, Bruce Kai Fong

    1988-09-01

    The determination of level populations and detailed population mechanisms in dense plasmas has become an increasingly important problem in atomic physics. In this work, the density variation of line intensities and level populations in aluminum K-shell and molybdenum and silver L-shell emission spectra have been measured from high-powered, laser-produced plasmas. For each case, the density dependence of the observed line emission is due to the effect of high frequency electron-ion collisions on metastable levels. The density dependent line intensities vary greatly in laser-produced plasmas and can be used to extract detailed information concerning the population kinetics and level populations of the ions. The laser-plasmas had to be fully characterized in order to clearly compare the observed density dependence with atomic theory predictions. This has been achieved through the combined use of new diagnostic instruments and microdot targets which provided simultaneously space, time, and spectrally resolved data. The plasma temperatures were determined from the slope of the hydrogen-like recombination continuum. The time resolved electron density profiles were measured using multiple frame holographic interferometry. Thus, the density dependence of K-shell spectral lines could be clearly examined, independent of assumptions concerning the dynamics of the plasma. In aluminum, the electron density dependence of various helium-like line intensity ratios were measured. Standard collisional radiative equilibrium models fail to account for the observed density dependence measured for the "Heα/IC" ratio. Instead, a quasi-steady state atomic model based on a purely recombining plasma is shown to accurately predict the measured density dependence. This same recombining plasma calculation successfully models the density dependence of the high-n "Heγ/Heβ" and "Heδ/Heβ" helium-like resonance line intensity ratios

  8. Space and time resolved spectroscopy of laser-produced plasmas: A study of density-sensitive x-ray transitions in helium-like and neon-like ions

    International Nuclear Information System (INIS)

    Young, Bruce Kai Fong.

    1988-09-01

    The determination of level populations and detailed population mechanisms in dense plasmas has become an increasingly important problem in atomic physics. In this work, the density variation of line intensities and level populations in aluminum K-shell and molybdenum and silver L-shell emission spectra have been measured from high-powered, laser-produced plasmas. For each case, the density dependence of the observed line emission is due to the effect of high frequency electron-ion collisions on metastable levels. The density dependent line intensities vary greatly in laser-produced plasmas and can be used to extract detailed information concerning the population kinetics and level populations of the ions. The laser-plasmas had to be fully characterized in order to clearly compare the observed density dependence with atomic theory predictions. This has been achieved through the combined use of new diagnostic instruments and microdot targets which provided simultaneously space, time, and spectrally resolved data. The plasma temperatures were determined from the slope of the hydrogen-like recombination continuum. The time resolved electron density profiles were measured using multiple frame holographic interferometry. Thus, the density dependence of K-shell spectral lines could be clearly examined, independent of assumptions concerning the dynamics of the plasma. In aluminum, the electron density dependence of various helium-like line intensity ratios were measured. Standard collisional radiative equilibrium models fail to account for the observed density dependence measured for the "Heα/IC" ratio. Instead, a quasi-steady state atomic model based on a purely recombining plasma is shown to accurately predict the measured density dependence. This same recombining plasma calculation successfully models the density dependence of the high-n "Heγ/Heβ" and "Heδ/Heβ" helium-like resonance line intensity ratios

  9. Experimental and mathematical model of the interactions in the mixed culture of links in the “producer-consumer” cycle

    Science.gov (United States)

    Pisman, T. I.

    2009-07-01

    The paper presents an experimental and mathematical model of interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) in the "producer-consumer" aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the "consumer" component, feeding on the mixed algal culture of the "producer" component. It has been found that metabolites of the algae Scenedesmus have an adverse effect on the reproduction of the ciliates P. caudatum. When this effect is taken into account, the results of the mathematical model are in qualitative agreement with the experimental results. In the "producer-consumer" biotic cycle, it was shown that coexistence is impossible in the mixed culture of invertebrates of the "consumer" component: the ciliates P. caudatum are driven out by the rotifers B. plicatilis.
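
    The reported outcome, rotifers excluding ciliates on a shared algal food source, can be reproduced qualitatively with a toy Euler-integrated ODE sketch; every rate constant here is invented for illustration and is not fitted to the experiment.

```python
def simulate(steps=20000, dt=0.01):
    """Toy 'producer-consumer' loop: algae a feed two competing grazers,
    ciliates c and rotifers r. The grazer with the higher feeding-to-loss
    ratio excludes the other (competitive exclusion)."""
    a, c, r = 1.0, 0.5, 0.5
    for _ in range(steps):
        da = a * (1 - a) - 0.4 * a * c - 0.6 * a * r  # logistic producer, grazed
        dc = 0.3 * a * c - 0.2 * c                    # ciliates: weaker feeder
        dr = 0.5 * a * r - 0.2 * r                    # rotifers: stronger feeder
        a, c, r = a + dt * da, c + dt * dc, r + dt * dr
    return a, c, r

a, c, r = simulate()
print(round(c, 6), round(r, 3))   # ciliates driven out, rotifers persist
```

    The rotifers sustain themselves at a lower algal density (0.2/0.5 = 0.4) than the ciliates need (0.2/0.3 ≈ 0.67), so the ciliate population decays once the algae settle at the rotifers' break-even level.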

  10. Time Extensions of Petri Nets for Modelling and Verification of Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Szmuc

    2002-01-01

    Full Text Available The main aim of the paper is a presentation of time extensions of Petri nets appropriate for modelling and analysis of hard real-time systems. It is assumed that the extensions must provide a model of time flow, an ability to force a transition to fire within a stated timing constraint (the so-called strong firing rule), and timing constraints represented by intervals. The presented survey includes extensions of classical Place/Transition Petri nets, as well as ones applied to high-level Petri nets. The expressiveness of each time extension is illustrated using a simple hard real-time system. The paper also includes a brief description of analysis and verification methods related to the extensions, and a survey of software tools supporting modelling and analysis of the considered Petri nets.

  11. Time-dependent H-like and He-like Al lines produced by ultra-short pulse laser

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takako; Kato, Masatoshi [National Inst. for Fusion Science, Nagoya (Japan); Shepherd, R; Young, B; More, R; Osterheld, Al

    1998-03-01

    We have performed numerical modeling of time-resolved x-ray spectra from thin foil targets heated by the LLNL Ultra-short pulse (USP) laser. The targets were aluminum foils of thickness ranging from 250 Å to 1250 Å, heated with 120 fs pulses of 400 nm light from the USP laser. The laser energy was approximately 0.2 Joules, focused to a 3 micron spot size for a peak intensity near 2 × 10¹⁹ W/cm². Lyα and Heα lines were recorded using a 900 fs x-ray streak camera. We calculate the effective ionization, recombination and emission rate coefficients, including density effects, for H-like and He-like aluminum ions using a collisional radiative model. We calculate time-dependent ion abundances using these effective ionization and recombination rate coefficients. The time-dependent electron temperature and density used in the calculation are based on an analytical model for the hydrodynamic expansion of the target foils. During the laser pulse the target is ionized. After the laser heating stops, the plasma begins to recombine. Using the calculated time-dependent ion abundances and the effective emission rate coefficients, we calculate the time-dependent Lyα and Heα lines. The calculations reproduce the main qualitative features of the experimental spectra. (author)
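
    The core of such a time-dependent abundance calculation, relaxing charge states with effective ionization and recombination rates, can be sketched with a toy two-level balance; the rate coefficients and density below are illustrative, not the paper's collisional-radiative values.

```python
def evolve_charge_states(S, R, n_e, dt, steps):
    """Relax a two-level charge-state balance (say, He-like <-> H-like Al)
    with effective ionization (S) and recombination (R) rate coefficients
    [cm^3/s] at electron density n_e [cm^-3], using explicit Euler steps."""
    he, h = 1.0, 0.0                       # start fully He-like
    for _ in range(steps):
        ionize = S * n_e * he * dt
        recombine = R * n_e * h * dt
        he, h = he - ionize + recombine, h + ionize - recombine
    return he, h

he, h = evolve_charge_states(S=1e-10, R=1e-11, n_e=1e21, dt=1e-15, steps=40000)
print(round(h, 3))   # relaxing toward the equilibrium fraction S/(S+R)
```

    With time-dependent temperature and density from a hydrodynamic model, S and R would themselves vary at each step, producing the ionization-then-recombination history described in the record.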

  12. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    Science.gov (United States)

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches, the multinomial model and the zero-inflated gamma model, in evaluating the factors associated with the practice of, and the amount of time spent on, leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil - Estudo Longitudinal de Saúde do Adulto) were analysed. Regular leisure time physical activity was measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates the mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups, since physical inactivity is associated with various diseases and even premature death.
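
    The two-part ("zero-inflated gamma") idea, a probability of being active multiplied by a gamma-distributed duration among the active, can be illustrated on simulated data; the participation rate and gamma parameters below are invented, not ELSA-Brasil estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulated weekly leisure-time physical activity (minutes): ~60% report
# zero, and active participants follow a gamma distribution
active = rng.random(5000) < 0.4
minutes = np.where(active, rng.gamma(shape=2.0, scale=60.0, size=5000), 0.0)

# two-part view: model P(active) and the gamma-distributed duration among
# the active separately, then combine them into an overall mean
p_active = active.mean()
mean_if_active = minutes[active].mean()
overall_mean = p_active * mean_if_active
print(round(float(overall_mean), 1))   # close to 0.4 * 120 = 48 minutes/week
```

    In a full model, both the participation probability and the conditional gamma mean would be regressed on covariates such as gender, age, education and income.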

  13. Effects of build parameters on linear wear loss in plastic part produced by fused deposition modeling

    Science.gov (United States)

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2017-07-01

    Fused Deposition Modeling (FDM) is one of the prominent additive manufacturing technologies for producing polymer products. FDM is a complex additive manufacturing process that can be influenced by many process conditions. The industrial demands placed on the FDM process are increasing, with higher levels of product functionality and properties required. The functionality and performance of FDM-manufactured parts are greatly influenced by the combination of many FDM process parameters. Designers and researchers have paid considerable attention to the effects of FDM process parameters on product functionalities and properties such as mechanical strength, surface quality, dimensional accuracy, build time and material consumption. However, very limited studies have been carried out to investigate and optimize the effect of FDM build parameters on wear performance. This study focuses on the effect of different build parameters on the micro-structural and wear performance of FDM specimens using a definitive screening design based quadratic model. This provides the additive manufacturing engineer with a systematic approach for making decisions among the manufacturing parameters to achieve the desired product quality, reducing cost and effort.

  14. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    Science.gov (United States)

    Stránský, V; Thinová, L

    2017-11-01

    In the year 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series in the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving-average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes which occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
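
    As a rough illustration of the regression-with-lags idea behind such SARIMAX/regARIMA models (not the actual fitted regARIMA(5,1,3)), one can fit a seasonal ARX by least squares to a synthetic radon-like series; all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic hourly radon-like series: a diurnal cycle plus a response to
# temperature (standing in for the exogenous atmospheric parameters)
n, season = 600, 24
t = np.arange(n)
temp = 10 + 5 * np.sin(2 * np.pi * t / season + 1.0)
radon = 50 + 8 * np.sin(2 * np.pi * t / season) - 1.5 * temp + rng.normal(0, 1, n)

# far-simplified seasonal ARX: regress on lag 1, the seasonal lag 24,
# and the exogenous temperature, then inspect the residual spread
X = np.column_stack([np.ones(n - season),
                     radon[season - 1:n - 1],   # lag 1
                     radon[:n - season],        # seasonal lag 24
                     temp[season:]])            # exogenous regressor
y = radon[season:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_std = (y - X @ coef).std()
print(round(float(resid_std), 2))   # well below the raw series' spread
```

    In the record's application the residuals of the fitted model, rather than the raw concentrations, are what get compared against the seismic evidence.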

  15. Mechatronic modeling of real-time wheel-rail contact

    CERN Document Server

    Bosso, Nicola; Gugliotta, Antonio; Somà, Aurelio

    2013-01-01

    Real-time simulation of the behaviour of a rail vehicle requires realistic solutions of the wheel-rail contact problem which can work in real-time mode. Examples of such solutions for the online mode are well known and are implemented within standard and commercial simulation codes for rail vehicle dynamics. This book is the result of the research activities carried out by the Railway Technology Lab of the Department of Mechanical and Aerospace Engineering at Politecnico di Torino. It presents work on the development of a real-time wheel-rail contact model and provides the simulation results obtained with dSpace real-time hardware. In addition, the implementation of the contact model within a real-time model of the complex mechatronic system of a scaled test rig is presented, which may be useful for the further validation of the real-time contact model with experiments on a full-scale test rig.

  16. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
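
    A minimal version of this scheme, time-delay embedding plus a nearest-neighbor local predictor, can be sketched on a chaotic logistic-map series standing in for the Lorenz convection amplitude; the embedding dimension and delay are illustrative choices.

```python
import numpy as np

# chaotic scalar series from the logistic map
x = np.empty(1100)
x[0] = 0.3
for i in range(1099):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
train = x[:1000]

def embed(series, dim=3, tau=1):
    """Time-delay embedding: each row is a reconstructed state-space point."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

states = embed(train)                 # 998 reconstructed states
E, targets = states[:-1], train[3:]   # state j evolves to train[j + 3]

def predict_next(recent3):
    """Local forecast: follow the nearest state-space neighbour one step."""
    j = np.argmin(np.linalg.norm(E - recent3, axis=1))
    return targets[j]

# evaluate one-step predictions on the 100 points after the training data
errors = [abs(predict_next(x[i - 3:i]) - x[i]) for i in range(1000, 1100)]
print(round(float(np.mean(errors)), 4))   # small one-step errors
```

    The proposed method goes further by averaging over a whole topological neighborhood and tracking a moving RMS error along the prediction horizon, but the embedding-plus-neighbors core is the same.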

  17. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes. For instance, a time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial-differential-equation-based model with runtimes of up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to model selection. Hence, we believe that it should be included, leading to an overall trade-off between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. Under time constraints, more expensive models can be sampled much less often than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
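
    The bootstrapped evidence-error idea can be sketched as follows; the likelihood draws are synthetic and the two budgets (2000 versus 20 prior samples) are invented to stand in for a fast and an expensive model under the same wall-clock limit.

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_bme(likelihoods, n_boot=2000):
    """Monte Carlo BME estimate (mean likelihood over prior samples)
    with a bootstrap standard error; fewer samples mean a larger error."""
    boot = np.array([rng.choice(likelihoods, likelihoods.size).mean()
                     for _ in range(n_boot)])
    return likelihoods.mean(), boot.std()

# the same hypothetical likelihood distribution, sampled under two budgets
draws = rng.lognormal(mean=-2.0, sigma=1.0, size=2000)
bme_fast, err_fast = bootstrap_bme(draws)
bme_slow, err_slow = bootstrap_bme(draws[:20])
print(err_slow > err_fast)   # the expensive model's evidence is less certain
```

    A runtime-aware selection rule could then discount each model's weight by its bootstrap error, penalizing models whose evidence is poorly resolved within the budget.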

  18. Resonantly produced 7 keV sterile neutrino dark matter models and the properties of Milky Way satellites.

    Science.gov (United States)

    Abazajian, Kevork N

    2014-04-25

    Sterile neutrinos produced through a resonant Shi-Fuller mechanism are arguably the simplest model for a dark matter interpretation of the origin of the recent unidentified x-ray line seen toward a number of objects harboring dark matter. Here, I calculate the exact parameters required in this mechanism to produce the signal. The suppression of small-scale structure predicted by these models is consistent with Local Group and high-z galaxy count constraints. Very significantly, the parameters necessary in these models to produce the full dark matter density fulfill previously determined requirements to successfully match the Milky Way Galaxy's total satellite abundance, the satellites' radial distribution, and their mass density profile, or the "too-big-to-fail problem." I also discuss how further precision determinations of the detailed properties of the candidate sterile neutrino dark matter can probe the nature of the quark-hadron transition, which takes place during the dark matter production.

  19. Modeling dynamic effects of promotion on interpurchase times

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    textabstractIn this paper we put forward a duration model to analyze the dynamic effects of marketing-mix variables on interpurchase times. We extend the accelerated failure-time model with an autoregressive structure. An important feature of our model is that it allows for different long-run and

  20. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and those due to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter- to sub-millimeter-level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step by step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of GPS time series analysis.
This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e

  1. Modeling the Propagation of Atmospheric Gravity Waves Produced by an Underground Nuclear Explosion using the Transfer Function Model

    Science.gov (United States)

    Bruntz, R. J.; Mayr, H. G.; Paxton, L. J.

    2017-12-01

    We will present results from the Transfer Function Model (TFM), which simulates the neutral atmosphere, from 0 to 700 km, across the entire globe (pole to pole). The TFM is able to rapidly calculate the density and temperature perturbations created by a localized impulse. We have used the TFM to simulate a ground-level explosion (equivalent to an underground nuclear explosion (UNE)) and its effects on the neutral atmosphere, including the propagation of gravity waves up to ionospheric heights. At ionospheric altitudes, ion-neutral interactions are expected to lead to perturbations in the electron density. These perturbations can be observed as changes in the total electron content (TEC), a feature readily observed by the globally distributed network of global navigation satellite system (GNSS) sensors. We will discuss the time and location of the maximum atmospheric disturbances at a number of altitudes, including the peaks of several ionospheric layers, among them the F2 layer, which is often treated as the major driver of changes in GNSS-TEC observations. We will also examine the drop-off of atmospheric disturbances at those altitudes with both increasing time and increasing distance. The six known North Korean UNEs in the 21st century have sparked increased interest in UNE detection through atmospheric and ionospheric observations. The latest test by North Korea (3 Sept. 2017) was the largest UNE in over two decades. We will compare TFM results to the analysis of previous UNEs, including some tests by North Korea, and discuss possible confounding factors in predicting the time, location, and amplitude of atmospheric and ionospheric disturbances produced by a UNE.

  2. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than one component distribution, are preferred over single-state probability models in modeling the distribution of travel time. The literature indicates that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to the unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrate that this model offers significant flexibility in accounting for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of the onset and offset of congestion periods.
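
    The stick-breaking construction underlying the DPMM can be sketched in pure Python; the concentration parameter and the truncation at six components mirror the record's setup, but the draw itself is only illustrative.

```python
import random

def stick_breaking(alpha, n_components):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)."""
    weights, remaining = [], 1.0
    for _ in range(n_components - 1):
        v = random.betavariate(1, alpha)
        weights.append(v * remaining)
        remaining *= 1 - v
    weights.append(remaining)   # the last component takes the leftover stick
    return weights

random.seed(3)
w = stick_breaking(alpha=1.0, n_components=6)
print([round(x, 3) for x in w])   # six weights summing to one
```

    In the full model, each weight is paired with a lognormal component, and MCMC infers both the weights and the component parameters, effectively letting the data decide how many components carry appreciable mass.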

  3. A stochastic surplus production model in continuous time

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte

    2017-01-01

    Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size ...

  4. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads’ STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best fitting distributions are drawn for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The main impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
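The distribution-comparison step described above can be sketched with SciPy, which provides all three candidate families (the log-logistic is available as `stats.fisk`). The durations below are simulated stand-ins for the SIMS records, and ranking the fits by AIC is an assumption of this sketch; the paper does not state its selection criterion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical clearance times (minutes); real data would come from SIMS records.
durations = stats.weibull_min.rvs(c=1.5, scale=40.0, size=500, random_state=rng)

candidates = {
    "gamma": stats.gamma,
    "log-logistic": stats.fisk,   # SciPy's name for the log-logistic
    "weibull": stats.weibull_min,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)        # location fixed at zero
    loglik = np.sum(dist.logpdf(durations, *params))
    k = len(params) - 1                          # free parameters (loc is fixed)
    aic[name] = 2 * k - 2 * loglik               # lower is better

best = min(aic, key=aic.get)
```

The same loop, run separately per incident type, reproduces the kind of per-category best-fit table the abstract reports.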

  5. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  6. Residence time modeling of hot melt extrusion processes.

    Science.gov (United States)

    Reitz, Elena; Podhaisky, Helmut; Ely, David; Thommes, Markus

    2013-11-01

    The hot melt extrusion process is a widespread technique to mix viscous melts. The residence time of material in the process frequently determines the product properties. An experimental setup and a corresponding mathematical model were developed to evaluate residence time and residence time distribution in twin screw extrusion processes. The extrusion process was modeled as the convolution of a mass transport process described by a Gaussian probability function, and a mixing process represented by an exponential function. The residence time of the extrusion process was determined by introducing a tracer at the extruder inlet and measuring the tracer concentration at the die. These concentrations were fitted to the residence time model, and an adequate correlation was found. Different parameters were derived to characterize the extrusion process including the dead time, the apparent mixing volume, and a transport related axial mixing. A 2³ design of experiments was performed to evaluate the effect of powder feed rate, screw speed, and melt viscosity of the material on the residence time. All three parameters affect the residence time of material in the extruder. In conclusion, a residence time model was developed to interpret experimental data and to get insights into the hot melt extrusion process. Copyright © 2013 Elsevier B.V. All rights reserved.
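The model described above, a Gaussian transport term convolved with an exponential mixing term, can be reproduced numerically; the delay, spread, and mixing constant below are arbitrary illustrative values, not fitted parameters from the study.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 60.0, dt)            # time axis, arbitrary units

t_delay, sigma, tau = 10.0, 1.0, 5.0    # assumed dead time, spread, mixing constant

transport = np.exp(-0.5 * ((t - t_delay) / sigma) ** 2)
transport /= transport.sum() * dt       # normalised Gaussian transport term

mixing = np.exp(-t / tau) / tau         # exponential mixing term

rtd = np.convolve(transport, mixing)[: len(t)] * dt
rtd /= rtd.sum() * dt                   # renormalise after truncating the tail

mean_residence_time = np.sum(t * rtd) * dt   # close to t_delay + tau
```

The mean of the convolved distribution is the sum of the two components' means, which is why the dead time and the mixing constant can be read off separately from fitted tracer curves.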

  7. Searching for the standard model Higgs boson produced by vector boson fusion in the fully hadronic four-jet topology with CMS

    CERN Document Server

    Chernyavskaya, Nadezda

    2017-01-01

    A search for the standard model Higgs boson produced by vector boson fusion in the fully hadronic four-jet topology is presented. The analysis is based on 2.3 fb$^{-1}$ of proton-proton collision data at $\sqrt{s}$ = 13 TeV collected by CMS in 2015. Upper limits, at 95% confidence level, on the production cross section times branching fraction of the Higgs boson decaying to bottom quarks are derived for a Higgs boson mass of 125 GeV. The fitted signal strength relative to the expectation for the standard model Higgs boson is obtained. Results are also combined with those obtained with Run 1 data at $\sqrt{s}$ = 8 TeV collected in 2012.

  8. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil

    Directory of Open Access Journals (Sweden)

    Aline Araújo Nobre

    2017-08-01

    Full Text Available ABSTRACT. OBJECTIVE: To compare two methodological approaches, the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice of and the amount of time spent on leisure time physical activity. METHODS: Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil – Estudo Longitudinal de Saúde do Adulto) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. RESULTS: The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates the mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. CONCLUSIONS: The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups, since physical inactivity is associated with different diseases and even premature death.
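A minimal simulation illustrates the zero-inflated gamma idea, a point mass at zero minutes plus a gamma distribution for positive activity times, and how the unconditional mean is recovered from the two parts; the parameter values are invented for the sketch and are not ELSA-Brasil estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
p_active = 0.4                     # assumed probability of any leisure-time activity
shape, scale = 2.0, 60.0           # assumed gamma for minutes/week among the active

active = rng.random(n) < p_active
minutes = np.where(active, rng.gamma(shape, scale, size=n), 0.0)

# The two parts are estimated separately:
p_hat = np.mean(minutes > 0)                   # binary (zero-inflation) part
mean_positive = minutes[minutes > 0].mean()    # gamma part, mean of positive times
overall_mean = p_hat * mean_positive           # unconditional mean minutes/week
```

In a regression setting each part gets its own linear predictor, so a covariate such as gender can shift both the odds of being active and the expected duration among the active.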

  9. Continuous time modelling with individually varying time intervals for oscillating and non-oscillating processes.

    Science.gov (United States)

    Voelkle, Manuel C; Oud, Johan H L

    2013-02-01

    When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.

  10. Modeling Nonstationary Emotion Dynamics in Dyads using a Time-Varying Vector-Autoregressive Model.

    Science.gov (United States)

    Bringmann, Laura F; Ferrer, Emilio; Hamaker, Ellen L; Borsboom, Denny; Tuerlinckx, Francis

    2018-01-01

    Emotion dynamics are likely to arise in an interpersonal context. Standard methods to study emotions in interpersonal interaction are limited because stationarity is assumed. This means that the dynamics, for example, time-lagged relations, are invariant across time periods. However, this is generally an unrealistic assumption. Whether caused by an external (e.g., divorce) or an internal (e.g., rumination) event, emotion dynamics are prone to change. The semi-parametric time-varying vector-autoregressive (TV-VAR) model is based on well-studied generalized additive models, implemented in the software R. The TV-VAR model can explicitly model changes in temporal dependency without pre-existing knowledge about the nature of the change. A simulation study is presented, showing that the TV-VAR model is superior to the standard time-invariant VAR model when the dynamics change over time. The TV-VAR model is applied to empirical data on daily feelings of positive affect (PA) from a single couple. Our analyses indicate reliable changes in the male's emotion dynamics over time, but not in the female's, which were not predicted by her own affect or that of her partner. This application illustrates the usefulness of the TV-VAR model for detecting changes in the dynamics of a system.

  11. Foundation for a Time Interval Access Control Model

    National Research Council Canada - National Science Library

    Afinidad, Francis B; Levin, Timothy E; Irvine, Cynthia E; Nguyen, Thuy D

    2005-01-01

    A new model for representing temporal access control policies is introduced. In this model, temporal authorizations are represented by time attributes associated with both subjects and objects, and a time interval access graph...

  12. Characterization of Models for Time-Dependent Behavior of Soils

    DEFF Research Database (Denmark)

    Liingaard, Morten; Augustesen, Anders; Lade, Poul V.

    2004-01-01

    Different classes of constitutive models have been developed to capture the time-dependent viscous phenomena (creep, stress relaxation, and rate effects) observed in soils. Models based on empirical, rheological, and general stress-strain-time concepts have been studied. The first part... Special attention is paid to elastoviscoplastic models that combine inviscid elastic and time-dependent plastic behavior. Various general elastoviscoplastic models can roughly be divided into two categories: models based on the concept of overstress and models based on nonstationary flow surface theory...

  13. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  14. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  15. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  16. Train Dwell Time Models for Rail Passenger Service

    Directory of Open Access Journals (Sweden)

    San Hor Peay

    2016-01-01

    Full Text Available In recent years, more studies have been conducted on train dwell time, as it is a key parameter of rail system performance and reliability. This paper provides an overview of train dwell time models for rail passenger service from various continents, namely Asia, North America, Europe, and Australia. The factors affecting train dwell time are identified and analysed across several rail network operators. The dwell time models developed by various researchers are also discussed and reviewed. Finally, the contributions of the outcomes of these models are briefly addressed. In conclusion, this paper suggests that there is a need to further study the factors with strong influence upon dwell time to improve the quality of train services.

  17. Time dependent photon and neutrino emission from Mrk 421 in the context of the one-zone leptohadronic model

    Directory of Open Access Journals (Sweden)

    Mastichiadis Apostolos

    2013-12-01

    Full Text Available We apply a recently developed time-dependent one-zone leptohadronic model to study the emission of the blazar Mrk 421. Both processes involving proton-photon interactions, i.e. photopair (Bethe-Heitler) and photopion, have been modeled in great detail using the results of Monte Carlo simulations, like the SOPHIA event generator, in a self-consistent scheme that couples energy losses and secondary injection. We find that TeV gamma-rays can be attributed to synchrotron radiation either from relativistic protons or, alternatively, from secondary leptons produced via photohadronic processes. We also study the variability patterns that each scenario predicts and we find that while the former is more energetically favored, it is the latter that produces, in a more natural way, the usual quadratic behavior between X-rays and TeV gamma-rays. We also use the obtained SEDs to calculate in detail the expected neutron and neutrino fluxes that each model predicts.

  18. Reduction of time for producing and acclimatizing two bamboo species in a greenhouse

    Directory of Open Access Journals (Sweden)

    Giovanni Aquino Gasparetto

    2013-03-01

    Full Text Available China has been investing in bamboo cultivation on Brazilian lands. However, there is a significant deficit of seedling production for civil construction and the charcoal and cellulose sectors, something which compromises a part of the forestry sector. In order to contribute so that the bamboo production chain solves this problem, this study aimed to check whether the application of indole acetic acid (IAA) could promote plant growth in a shorter cultivation time. In the study, Bambusa vulgaris and B. vulgaris var. vitatta stakes underwent two treatments (0.25% and 5.0% of IAA) and were grown on washed sand in a greenhouse. Number of leaves, stem growth, rooting, and chlorophyll content were investigated. There was no difference with regard to stem growth, root length, and number of leaves for either species in the two treatments (0.25% and 5% IAA). The chlorophyll content variation between the two species may constitute a quality parameter of forest seedlings when compared to other bamboo species. After 43 days, the seedlings are ready for planting in areas of full sun. For the species studied here, the average time to seedling sale is from 4 to 6 months with no addition of auxin. Using this simple and low cost technique, nurserymen can produce bamboo seedlings with reduced time, costs, and manpower.

  19. Integral-Value Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form.

  20. A Time- and Cost-Saving Method of Producing Rat Polyclonal Antibodies

    International Nuclear Information System (INIS)

    Wakayama, Tomohiko; Kato, Yukio; Utsumi, Rie; Tsuji, Akira; Iseki, Shoichi

    2006-01-01

    Producing antibodies usually takes more than three months. In the present study, we introduce a faster way of producing polyclonal antibodies based on preparation of the recombinant oligopeptide as antigen followed by immunization of rats. Using this method, we produced antisera against two mouse proteins: ERGIC-53 and c-Kit. An expression vector ligated with a pair of complementary synthetic oligodeoxyribonucleotides encoding the protein was introduced into bacteria, and the recombinant oligopeptide fused with the carrier protein glutathione-S-transferase was purified. Wistar rats were immunized by injecting the emulsified antigen subcutaneously into the hind footpads, followed by a booster injection after 2 weeks. One week after the booster, the sera were collected and examined for the antibody titer by immunohistochemistry. Antisera with 1600-fold titer at the maximum were obtained for both antigens and confirmed for their specificity by Western blotting. Anti-ERGIC-53 antisera recognized acinar cells in the sublingual gland, and anti-c-Kit antisera recognized spermatogenic and Leydig cells in the testis. These antisera were applicable to fluorescent double immunostaining with mouse monoclonal or rabbit polyclonal antibodies. Consequently, this method enabled us to produce specific rat polyclonal antisera available for immunohistochemistry in less than one month at a relatively low cost.

  1. Modeling of the time sharing for lecturers

    Directory of Open Access Journals (Sweden)

    E. Yu. Shakhova

    2017-01-01

    Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of the university lecturer and other duties. The mathematical problem of optimal working time planning for university lecturers is presented. A review of the relevant documents and of native and foreign works on the subject is made. Simulation conditions, based on an analysis of the subject area, are defined. Models of optimal working time sharing for university lecturers («the second half of the day») are developed and implemented in the system MathCAD, and optimal solutions have been obtained. Three problems have been solved: 1) to find the optimal time sharing of «the second half of the day» for a certain position of university lecturer; 2) to find the optimal time sharing of «the second half of the day» for all positions of university lecturers in view of the established model of academic load differentiation; 3) to find the volume of the non-standardized part of working time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual professor in the preparation of the work plan of a university department for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.

  2. Extended causal modeling to assess Partial Directed Coherence in multiple time series with significant instantaneous interactions.

    Science.gov (United States)

    Faes, Luca; Nollo, Giandomenico

    2010-11-01

    The Partial Directed Coherence (PDC) and its generalized formulation (gPDC) are popular tools for investigating, in the frequency domain, the concept of Granger causality among multivariate (MV) time series. PDC and gPDC are formalized in terms of the coefficients of an MV autoregressive (MVAR) model which describes only the lagged effects among the time series and forsakes instantaneous effects. However, instantaneous effects are known to affect linear parametric modeling, and are likely to occur in experimental time series. In this study, we investigate the impact on the assessment of frequency domain causality of excluding instantaneous effects from the model underlying PDC evaluation. Moreover, we propose the utilization of an extended MVAR model including both instantaneous and lagged effects. This model is used to assess PDC either in accordance with the definition of Granger causality when considering only lagged effects (iPDC), or with an extended form of causality, when we consider both instantaneous and lagged effects (ePDC). The approach is first evaluated on three theoretical examples of MVAR processes, which show that the presence of instantaneous correlations may produce misleading profiles of PDC and gPDC, while ePDC and iPDC derived from the extended model provide here a correct interpretation of extended and lagged causality. It is then applied to representative examples of cardiorespiratory and EEG MV time series. They suggest that ePDC and iPDC are better interpretable than PDC and gPDC in terms of the known cardiovascular and neural physiologies.
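Given estimated MVAR coefficients, the (lag-only) PDC itself is straightforward to compute. The sketch below implements the standard definition for supplied coefficient matrices; estimation of the extended model with instantaneous effects, the paper's actual contribution, is not shown, and the example coefficients are invented.

```python
import numpy as np

def pdc(A, freqs):
    """Partial Directed Coherence from MVAR coefficient matrices.

    A: array of shape (p, K, K) holding the lag matrices A_1..A_p.
    freqs: normalised frequencies in [0, 0.5].
    Returns an array of shape (len(freqs), K, K); entry [n, i, j] is the
    PDC from series j to series i at freqs[n].
    """
    p, K, _ = A.shape
    out = np.empty((len(freqs), K, K))
    for n, f in enumerate(freqs):
        Abar = np.eye(K, dtype=complex)
        for r in range(p):
            Abar -= A[r] * np.exp(-2j * np.pi * f * (r + 1))
        denom = np.sqrt(np.sum(np.abs(Abar) ** 2, axis=0))  # column norms
        out[n] = np.abs(Abar) / denom                       # normalise each column
    return out

# Two-channel example: channel 0 drives channel 1 with one lag, not vice versa.
A = np.zeros((1, 2, 2))
A[0] = [[0.5, 0.0],
        [0.4, 0.3]]
P = pdc(A, freqs=[0.1])
```

By construction each column of the PDC matrix has unit squared sum, and the zero coefficient from channel 1 to channel 0 yields exactly zero PDC in that direction.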

  3. A Time-Frequency Auditory Model Using Wavelet Packets

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1996-01-01

    A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns it is shown that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system.

  4. Modeling biological pathway dynamics with timed automata.

    Science.gov (United States)

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (Analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.

  5. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin

    2015-01-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280

  6. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared with discrete time processes in the setting of the present paper as well as other spatial-temporal situations.

  7. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.

  8. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
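The model's core claim, that noisy linear accumulation to a threshold yields scale-invariant response times, is easy to check numerically. The sketch below is a drift-diffusion caricature, not the authors' spiking-network implementation: with the noise amplitude scaled as the square root of the drift (the Poisson assumption), the coefficient of variation of the first-passage time comes out roughly constant across target durations.

```python
import numpy as np

def timed_responses(duration, n_trials, dt=0.005, theta=1.0, c=0.05, seed=3):
    """Noisy linear accumulation to threshold theta.

    The mean drift is theta/duration, so on average the accumulator crosses
    the threshold at the target duration; the noise scales as sqrt(drift),
    mimicking Poisson spike-count variability.
    """
    rng = np.random.default_rng(seed)
    drift = theta / duration
    n_steps = int(4 * duration / dt)
    noise = c * np.sqrt(drift * dt)
    steps = drift * dt + noise * rng.standard_normal((n_trials, n_steps))
    x = np.cumsum(steps, axis=1)
    first_cross = np.argmax(x >= theta, axis=1)  # index of first threshold crossing
    return first_cross * dt

short = timed_responses(1.0, 1000)
long_ = timed_responses(4.0, 1000)
cv_short = short.std() / short.mean()
cv_long = long_.std() / long_.mean()   # approximately equal: scale invariance
```

Rescaling each distribution by its mean superimposes the two histograms, the signature of scalar timing that the abstract describes.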

  9. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals of each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. To check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model for representing road accidents is the local level with a seasonal model.
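
A minimal sketch of the simplest structural time series component, the local level model, with a hand-rolled Kalman filter (the paper's chosen model additionally includes a seasonal component, and in practice the variances would be estimated rather than fixed):

```python
import numpy as np

# Local-level (random walk + noise) model filtered with a scalar Kalman
# filter in plain NumPy. Variances q and r are assumed known here.
rng = np.random.default_rng(42)

q, r = 0.1, 1.0                       # state and observation noise variances
level = np.cumsum(np.sqrt(q) * rng.standard_normal(200))   # latent level
y = level + np.sqrt(r) * rng.standard_normal(200)          # observed series

a, p = 0.0, 1e6                       # diffuse-ish initial state
filtered = []
for obs in y:
    p = p + q                         # predict step
    k = p / (p + r)                   # Kalman gain
    a = a + k * (obs - a)             # update step
    p = (1 - k) * p
    filtered.append(a)

err = np.mean((np.array(filtered) - level) ** 2)
print(err)   # filtered estimate tracks the latent level (MSE << obs var)
```

The filtered mean squared error settles well below the observation variance, which is what makes the latent trend usable for prediction and validation as in the paper.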

  10. Time series analysis based on two-part models for excessive zero count data to detect farm-level outbreaks of swine echinococcosis during meat inspections.

    Science.gov (United States)

    Adachi, Yasumoto; Makita, Kohei

    2017-12-01

    Echinococcus multilocularis is a parasite that causes highly pathogenic zoonoses and is maintained in foxes and rodents on Hokkaido Island, Japan. Detection of E. multilocularis infections in swine is epidemiologically important. In Hokkaido, administrative information is provided to swine producers based on the results of meat inspections. However, as the current criteria for providing administrative information often result in delays in providing information to producers, novel criteria are needed. Time series models were developed to monitor autocorrelations between data and lags using data collected from 84 producers at the Higashi-Mokoto Meat Inspection Center between April 2003 and November 2015. The two criteria were quantitatively compared using the sign test for the ability to rapidly detect farm-level outbreaks. Overall, the time series models based on an autoexponentially regressed zero-inflated negative binomial distribution with the 60th percentile cumulative distribution function of the model detected outbreaks earlier more frequently than the current criteria (90.5%, 276/305). The results suggest that a two-part model with autoexponential regression can adequately deal with data involving an excessive number of zeros and that the novel criteria overcome the disadvantages of the current criteria to provide an earlier indication of increases in the rate of echinococcosis. Copyright © 2017 Elsevier B.V. All rights reserved.
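
The count distribution at the core of such two-part models can be written down directly; the sketch below implements a zero-inflated negative binomial pmf with illustrative parameters (an integer dispersion parameter is assumed for simplicity, which is not required in general):

```python
import math

# Zero-inflated negative binomial (ZINB): with probability pi the count is
# a "structural" zero, otherwise it is drawn from a negative binomial.
def nb_pmf(k, r, p):
    """Negative binomial: number of failures before the r-th success."""
    return math.comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)

def zinb_pmf(k, pi, r, p):
    extra = pi if k == 0 else 0.0
    return extra + (1 - pi) * nb_pmf(k, r, p)

pi, r, p = 0.6, 3, 0.5                # illustrative parameter values
total = sum(zinb_pmf(k, pi, r, p) for k in range(200))
print(round(total, 6), zinb_pmf(0, pi, r, p))  # pmf sums to 1; zeros inflated
```

The inflated mass at zero (0.65 here versus 0.05 from the negative binomial part alone) is what lets such models cope with inspection data in which most farms report no positives.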

  11. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
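
For reference, the mean inactivity time that the model averages is conventionally defined as follows (a standard reliability-theory definition, not quoted from the paper):

```latex
% Mean inactivity time of a lifetime X with distribution function F:
% the expected elapsed time since failure, given failure occurred by time t.
\[
  m(t) \;=\; \mathbb{E}\,[\,t - X \mid X \le t\,]
        \;=\; \frac{\int_0^t F(u)\,\mathrm{d}u}{F(t)}, \qquad F(t) > 0 .
\]
```

Mixing this quantity over a population of heterogeneous units is what the "average" in the model's name refers to.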

  12. A continuous-time control model on production planning network ...

    African Journals Online (AJOL)

    A continuous-time control model on production planning network. DEA Omorogbe, MIU Okunsebor. Abstract. In this paper, we give a slightly detailed review of the Graves and Hollywood constant inventory tactical planning model for a job shop. The limitations of this model are pointed out and a continuous time ...

  13. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

    Some linear accelerators (the Sagittaire and Saturne accelerators produced by the Compagnie Generale de Radiologie (CGR/MeV) Corporation) produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the scanning electron beams. These were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside of the 50% isodose curve, but less than 8% outside of the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially-available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries.)

  14. Foundations of Sequence-to-Sequence Modeling for Time Series

    OpenAIRE

    Kuznetsov, Vitaly; Mariet, Zelda

    2018-01-01

    The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practiti...

  15. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
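
The flavor of bootstrap model averaging can be conveyed with a toy NumPy sketch (this mimics the single-bootstrap BOOT idea, not the paper's double-BOOT extension; the data, candidate models, and effect of interest are all hypothetical):

```python
import numpy as np

# Toy bootstrap model averaging: refit candidate polynomial models on each
# bootstrap resample, keep the AIC-best one, and average the "exposure"
# effect across resamples instead of trusting a single selected model.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, n)                    # hypothetical exposure series
y = 0.5 * x + rng.standard_normal(n) * 0.5   # true linear effect = 0.5

def fit_best_effect(xb, yb, max_degree=3):
    best_aic, best_effect = np.inf, None
    for d in range(1, max_degree + 1):
        X = np.vander(xb, d + 1, increasing=True)   # 1, x, ..., x^d
        beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
        rss = np.sum((yb - X @ beta) ** 2)
        aic = len(yb) * np.log(rss / len(yb)) + 2 * (d + 1)
        if aic < best_aic:
            best_aic, best_effect = aic, beta[1]    # linear-term coefficient
    return best_effect

effects = []
for _ in range(200):
    idx = rng.integers(0, n, n)                     # bootstrap resample
    effects.append(fit_best_effect(x[idx], y[idx]))
print(np.mean(effects))   # model-averaged estimate, close to 0.5
```

Because the selected model can change from resample to resample, the averaged estimate reflects selection uncertainty that a single AIC-best fit would hide.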

  16. A latent process model for forecasting multiple time series in environmental public health surveillance.

    Science.gov (United States)

    Morrison, Kathryn T; Shaddick, Gavin; Henderson, Sarah B; Buckeridge, David L

    2016-08-15

    This paper outlines a latent process model for forecasting multiple health outcomes arising from a common environmental exposure. Traditionally, surveillance models in environmental health do not link health outcome measures, such as morbidity or mortality counts, to measures of exposure, such as air pollution. Moreover, different measures of health outcomes are treated as independent, while it is known that they are correlated with one another over time as they arise in part from a common underlying exposure. We propose modelling an environmental exposure as a latent process, and we describe the implementation of such a model within a hierarchical Bayesian framework and its efficient computation using integrated nested Laplace approximations. Through a simulation study, we compare distinct univariate models for each health outcome with a bivariate approach. The bivariate model outperforms the univariate models in bias and coverage of parameter estimation, in forecast accuracy and in computational efficiency. The methods are illustrated with a case study using healthcare utilization and air pollution data from British Columbia, Canada, 2003-2011, where seasonal wildfires produce high levels of air pollution, significantly impacting population health. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.

    Science.gov (United States)

    Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry

    2012-12-01

    Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
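
While the paper's GPU reconstruction pipeline is far more involved, the geometric relation a stereoscopic camera exploits is the textbook depth-from-disparity formula, sketched here with hypothetical camera parameters:

```python
import numpy as np

# Stereo triangulation sketch: for a rectified stereo pair, depth is
# inversely proportional to disparity. All numbers below are hypothetical,
# not the parameters of the paper's miniaturized camera.
f = 500.0        # focal length in pixels (assumed)
baseline = 5.0   # distance between the two cameras, mm (assumed)

disparity = np.array([50.0, 25.0, 10.0])     # pixel shift between views
depth = f * baseline / disparity             # Z = f * B / d
print(depth)     # larger disparity => closer surface
```

Estimating a dense disparity map in real time, robustly on glossy deforming tissue, is the hard part the paper's matching algorithm addresses; the conversion to depth itself is this one line.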

  18. Three dimensional modeling on airflow, heat and mass transfer in partially impermeable enclosure containing agricultural produce during natural convective cooling

    International Nuclear Information System (INIS)

    Chourasia, M.K.; Goswami, T.K.

    2007-01-01

    A three dimensional model was developed to simulate the transport phenomena in a heat and mass generating porous medium cooled under a natural convective environment. Unlike previous works on this aspect, the present model was aimed at bulk-stored agricultural produce contained in a permeable package placed on a hard surface. This situation makes the bottom of the package impermeable to fluid flow and moisture transfer and adiabatic to heat transfer. The velocity vectors, isotherms and contours of the rate of moisture loss are presented during transient cooling as well as at steady state, using a commercially available computational fluid dynamics (CFD) code based on the finite volume technique. The CFD model was validated using experimental data on the time-temperature history as well as the weight loss obtained from a bag of potatoes kept in a cold store. The simulated and experimental values of product temperature and moisture loss were found to be in good agreement.

  19. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  20. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  1. Adaptation to Shift Work: Physiologically Based Modeling of the Effects of Lighting and Shifts’ Start Time

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A.; Postnov, Dmitry D.

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers’ sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers’ adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters. PMID:23308206
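
A crude illustration of why sleepiness peaks late in the waking day can be given with a classic two-process sketch (a simplification in the spirit of Borbély's model, not the paper's physiologically based arousal-system and pacemaker model; the time constants are commonly cited textbook values and the schedule is assumed):

```python
import numpy as np

# Two-process sketch: homeostatic pressure H rises during wake and decays
# during sleep; a sinusoid stands in for the circadian pacemaker.
dt = 0.1                     # hours
t = np.arange(0, 24, dt)
awake = (t < 16)             # assumed schedule: wake 16 h, then sleep 8 h

H = np.empty_like(t)
h = 0.0
for i, is_awake in enumerate(awake):
    target, tau = (1.0, 18.2) if is_awake else (0.0, 4.2)  # textbook values
    h += (target - h) * dt / tau
    H[i] = h

circadian = np.sin(2 * np.pi * (t - 3.0) / 24)   # alerting circadian drive
sleepiness = H - 0.3 * circadian
print(t[np.argmax(sleepiness)])  # sleepiness peaks late in the waking day
```

Even this caricature shows the interaction the paper exploits: shifting work hours changes where the homeostatic peak lands relative to the circadian trough, which is why shift start time matters.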

  2. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    Science.gov (United States)

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce higher accuracy forecasts than previously used 'pure' time series methods. The mentioned methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time series methods, which helped to increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using different prediction horizons.

  3. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
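
The two ingredients named in the abstract, an inverted pendulum model and velocity-driven tracking, can be sketched as follows (a minimal toy controller with assumed gains and a point-mass pendulum, not the paper's animation system):

```python
import numpy as np

# Inverted pendulum balanced by a velocity-driven rule: the torque is
# computed from the error between a desired angular velocity and the
# current one, rather than from PD position gains. Gains are assumed.
g, L_p, m, dt = 9.81, 1.0, 1.0, 0.001
theta, omega = 0.3, 0.0          # start tilted 0.3 rad from upright

for step in range(5000):         # 5 simulated seconds, explicit Euler
    omega_des = -2.0 * theta     # desired velocity: move back toward upright
    torque = 20.0 * (omega_des - omega)          # velocity-driven control
    alpha = g / L_p * np.sin(theta) + torque / (m * L_p**2)
    omega += alpha * dt
    theta += omega * dt

print(abs(theta))                # pendulum settles near upright
```

The appeal of the velocity-driven form, as the abstract notes, is that a single desired-velocity rule is easier to tune than per-joint PD position gains.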

  4. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  5. Time delays, population, and economic development

    Science.gov (United States)

    Gori, Luca; Guerrini, Luca; Sodini, Mauro

    2018-05-01

    This research develops an augmented Solow model with population dynamics and time delays. The model produces either a single stationary state or multiple stationary states (able to characterise different development regimes). The existence of time delays may cause persistent fluctuations in both economic and demographic variables. In addition, the work identifies in a simple way the reasons why economics affects demographics and vice versa.
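
A delayed Solow model of this kind is commonly written as follows (one plausible form of the setup, not necessarily the paper's exact equations):

```latex
% Solow capital accumulation with a production delay \tau:
\[
  \dot{k}(t) \;=\; s\, f\!\bigl(k(t-\tau)\bigr) \;-\; (n + \delta)\, k(t),
\]
% where k is capital per worker, s the saving rate, n population growth,
% and \delta depreciation. Delays of this form can destabilise the steady
% state and generate the persistent (Hopf-type) fluctuations the abstract
% mentions.
```
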

  6. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
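
The autoregressive model discussed here can be illustrated with a short simulation: generate an AR(1) series and recover its coefficient from the lag-1 regression (equivalent to the lag-1 Yule-Walker estimate; the coefficient value is illustrative).

```python
import numpy as np

# Simulate x_t = phi * x_{t-1} + e_t and estimate phi from the data.
rng = np.random.default_rng(7)
phi, n = 0.7, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Least-squares regression of x_t on x_{t-1}
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
print(phi_hat)   # close to 0.7
```

Moving average models are fit analogously but require nonlinear estimation, one of the practical differences between the two families the paper discusses.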

  7. Opioid Mechanism Involvement in the Synergism Produced by the Combination of Diclofenac and Caffeine in the Formalin Model

    OpenAIRE

    Flores-Ramos, José María; Díaz-Reval, M. Irene

    2013-01-01

    Analgesics can be administered in combination with caffeine for improved analgesic effectiveness in a process known as synergism. The mechanisms by which these combinations produce synergism are not yet fully understood. The aim of this study was to analyze whether the administration of diclofenac combined with caffeine produced antinociceptive synergism and whether opioid mechanisms played a role in this event. The formalin model was used to evaluate the antinociception produced by the oral ...

  8. Attributing impacts to emissions traced to major fossil energy and cement producers over specific historical time periods

    Science.gov (United States)

    Ekwurzel, B.; Frumhoff, P. C.; Allen, M. R.; Boneham, J.; Heede, R.; Dalton, M. W.; Licker, R.

    2017-12-01

    Given the progress in climate change attribution research over the last decade, attribution studies can inform policymakers guided by the UNFCCC principle of "common but differentiated responsibilities." Historically this has primarily focused on nations, yet requests for information on the relative role of the fossil energy sector are growing. We present an approach that relies on annual CH4 and CO2 emissions, from production through to the sale of products, drawn from the records of the largest industrial fossil fuel and cement producers from the mid-nineteenth century to the present (Heede 2014). Comparing global trends under all natural and human drivers with a scenario that excludes the emissions traced to major carbon producers, over the full historical record versus selected periods of recent history, can be policy relevant. This approach can be applied with simple climate models and earth system models, depending on the type of climate impacts being investigated. For example, results from a simple climate model, using best-estimate parameters and emissions traced to the 90 largest carbon producers, illustrate the relative difference in global mean surface temperature increase over 1880-2010 after removing these emissions from 1980-2010 (29-35%) compared with removing these emissions over 1880-2010 (42-50%). The changing relative contributions of the largest climate drivers can be important in helping stakeholders assess changing risks as they adapt to and reduce exposure and vulnerability to regional climate change impacts.

  9. FOOTPRINT: A Screening Model for Estimating the Area of a Plume Produced From Gasoline Containing Ethanol

    Science.gov (United States)

    FOOTPRINT is a screening model used to estimate the length and surface area of benzene, toluene, ethylbenzene, and xylene (BTEX) plumes in groundwater, produced from a gasoline spill that contains ethanol.

  10. Global transportation scenarios in the multi-regional EFDA-TIMES energy model

    International Nuclear Information System (INIS)

    Muehlich, P.; Hamacher, T.

    2009-01-01

    The aim of this study is to assess the potential impact of the transportation sector on the role of fusion power in the energy system of the 21st century. Key indicators in this context are global passenger and freight transportation activities, consumption levels of fuels used for transportation purposes, the electricity generation mix and greenhouse gas emissions. These quantities are calculated by means of the global multi-regional EFDA-TIMES energy system model. For the present study a new transportation module has been linked to the EFDA-TIMES framework in order to arrive at a consistent projection of future transportation demands. Results are discussed for various global energy scenarios, including assumed crossovers of road transportation activities towards hydrogen or electricity infrastructures and atmospheric CO2 concentration stabilization levels at 550 ppm and 450 ppm. Our results show that the penetration of fusion power plants is only slightly sensitive to transportation fuel choices but depends strongly on assumed climate policies. In the most stringent case considered here, the contribution of electricity produced by fusion power plants can become as large as about 50% by the end of the 21st century. This statement, however, is still of a preliminary nature, as the EFDA-TIMES project has not yet reached its final status.

  11. Space-time modeling of soil moisture

    Science.gov (United States)

    Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio

    2017-11-01

    A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion, and in its original version it is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively over the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of the space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.

  12. Search for the Standard Model Higgs boson produced by vector-boson fusion and decaying to bottom quarks in √s=8 TeV pp collisions with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Aaboud, M. [Faculté des Sciences, Université Mohamed Premier and LPTPM, Oujda (Morocco); Aad, G. [CPPM, Aix-Marseille Université and CNRS/IN2P3, Marseille (France); Abbott, B. [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK (United States); Abdallah, J. [University of Iowa, Iowa City, IA (United States); Collaboration: ATLAS Collaboration; and others

    2016-11-21

    A search with the ATLAS detector is presented for the Standard Model Higgs boson produced by vector-boson fusion and decaying to a pair of bottom quarks, using 20.2 fb⁻¹ of LHC proton-proton collision data at √s=8 TeV. The signal is searched for as a resonance in the invariant mass distribution of a pair of jets containing b-hadrons in vector-boson-fusion candidate events. The yield is measured to be −0.8±2.3 times the Standard Model cross-section for a Higgs boson mass of 125 GeV. The upper limit on the cross-section times the branching ratio is found to be 4.4 times the Standard Model cross-section at the 95% confidence level, consistent with the expected limit value of 5.4 (5.7) in the background-only (Standard Model production) hypothesis.

  13. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    Science.gov (United States)

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  14. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    Science.gov (United States)

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
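
    The deterministic baseline that the delay-deterministic model refines can be sketched with the textbook haploid selection recursion (an illustrative stand-in, not the authors' inference procedure): under this recursion the logit of the variant frequency gains log(1+s) per generation, so s can be read off from successive logit differences.

```python
import math

def step(x, s):
    # One generation of deterministic haploid selection with coefficient s.
    return x * (1 + s) / (1 + s * x)

def logit(x):
    return math.log(x / (1 - x))

s_true = 0.05
x = 0.1
traj = [x]
for _ in range(50):
    x = step(x, s_true)
    traj.append(x)

# The logit of frequency grows linearly: each generation adds log(1 + s),
# so the selection coefficient is recovered from mean logit differences.
d = [logit(b) - logit(a) for a, b in zip(traj, traj[1:])]
s_hat = math.exp(sum(d) / len(d)) - 1
print(s_hat)  # recovers s_true exactly in the noise-free deterministic model
```

    In a finite population, drift and the stochastic appearance of new mutations perturb exactly this kind of trajectory, which is the error source the delay-deterministic model corrects for.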

  15. The manifold model for space-time

    International Nuclear Information System (INIS)

    Heller, M.

    1981-01-01

    Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the Csub(infinity) manifold and of certain structures which arise in a natural way from the manifold concept are given. The role of the enveloping Euclidean space (i.e. of the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)

  16. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
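
    One standard data-driven route to Koopman spectral properties, in the spirit of (though not necessarily identical to) the techniques the paper relies on, is dynamic mode decomposition (DMD). A minimal sketch on an invented toy signal, two damped oscillations mixed into ten observation channels:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit Y ≈ A X with a rank-r linear operator A; return its eigenvalues."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / S  # operator projected onto r POD modes
    return np.linalg.eigvals(Atilde)

# Toy data: two damped oscillations (decay factors 0.99 and 0.95) in 10 channels.
rng = np.random.default_rng(1)
t = np.arange(200)
latent = np.stack([
    0.99**t * np.cos(0.3 * t), 0.99**t * np.sin(0.3 * t),
    0.95**t * np.cos(0.7 * t), 0.95**t * np.sin(0.7 * t),
])
Z = rng.standard_normal((10, 4)) @ latent
X, Y = Z[:, :-1], Z[:, 1:]  # snapshot pairs: Y is X shifted by one step

mags = np.sort(np.abs(dmd(X, Y, r=4)))
print(mags)  # eigenvalue magnitudes recover the decay factors 0.95 and 0.99
```

    The recovered eigenvalues are invariants of the underlying linear generative model, which is the property the framework exploits for comparing and clustering time series.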

  17. Maximizing time from the constraining European Working Time Directive (EWTD): The Heidelberg New Working Time Model.

    Science.gov (United States)

    Schimmack, Simon; Hinz, Ulf; Wagner, Andreas; Schmidt, Thomas; Strothmann, Hendrik; Büchler, Markus W; Schmitz-Winnenthal, Hubertus

    2014-01-01

    The introduction of the European Working Time Directive (EWTD) has greatly reduced the training hours of surgical residents, which translates into 30% less surgical and clinical experience. Such a dramatic drop in attendance has serious implications, such as compromised quality of medical care. As the surgical department of the University of Heidelberg, our goal was to establish a model that was compliant with the EWTD while avoiding a reduction in the quality of patient care and surgical training. We first performed workload analyses and performance statistics for all working areas of our department (operating theater, emergency room, specialized consultations, surgical wards and on-call duties) using personal interviews, time cards, medical documentation software, as well as data from the financial and personnel controlling sectors of our administration. Using that information, we specifically designed an EWTD-compatible work model and implemented it. Surgical wards and operating rooms (ORs) were not compliant with the EWTD. Between 5 pm and 8 pm, three ORs were still operating two-thirds of the time. By creating an extended work shift (7:30 am-7:30 pm), we effectively reduced the workload to less than 49% between 4 pm and 8 am, allowing the combination of an eight-hour working day with a 16-hour on-call duty, thus maximizing surgical resident training and ensuring continuity of patient care while maintaining EWTD guidelines. A precise workload analysis is the key to success. The Heidelberg New Working Time Model provides a legal model which, by avoiding rotating work shifts, assures quality of patient care and surgical training.

  18. Thermodynamic modelling of an onsite methanation reactor for upgrading producer gas from commercial small scale biomass gasifiers.

    Science.gov (United States)

    Vakalis, S; Malamis, D; Moustakas, K

    2018-06-15

    Small scale biomass gasifiers have the advantage of higher electrical efficiency in comparison to other conventional small scale energy systems. Nonetheless, a major drawback of small scale biomass gasifiers is the relatively poor quality of the producer gas. In addition, several EU Member States are seeking ways to store the excess energy that is produced from renewables like wind power and hydropower. A recent development is the storage of energy by electrolysis of water and the production of hydrogen in a process that is commonly known as "power-to-gas". The present manuscript proposes an onsite secondary reactor for upgrading producer gas by mixing it with hydrogen in order to initiate methanation reactions. A thermodynamic model has been developed for assessing the potential of the proposed methanation process. The model utilized input parameters from a representative small scale biomass gasifier and molar ratios of hydrogen from 1:0 to 1:4.1. The Villar-Cruise-Smith algorithm was used for minimizing the Gibbs free energy. The model returned the molar fractions of the permanent gases, the heating values and the Wobbe index. For mixtures of hydrogen and producer gas at a 1:0.9 ratio, the increase of the heating value is maximized, at 78%. For ratios higher than 1:3, the Wobbe index increases significantly and surpasses the value of 30 MJ/Nm³. Copyright © 2017 Elsevier Ltd. All rights reserved.
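
    The Wobbe index reported by the model is a standard quantity: heating value divided by the square root of the gas's density relative to air. A small sketch with textbook property values and an invented gas composition (not the paper's simulation results):

```python
# Lower heating values (MJ/Nm^3) and molar masses (g/mol) of the permanent gases;
# textbook values, listed here for illustration only.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "CO2": 0.0, "N2": 0.0}
M = {"H2": 2.016, "CO": 28.01, "CH4": 16.04, "CO2": 44.01, "N2": 28.01}
M_AIR = 28.96

def wobbe_index(frac):
    """Wobbe index of a mixture given molar fractions: LHV / sqrt(relative density)."""
    lhv = sum(frac[g] * LHV[g] for g in frac)
    density_rel = sum(frac[g] * M[g] for g in frac) / M_AIR
    return lhv / density_rel**0.5

# Hypothetical methane-rich upgraded producer gas (composition invented).
gas = {"H2": 0.10, "CO": 0.05, "CH4": 0.60, "CO2": 0.15, "N2": 0.10}
print(wobbe_index(gas))  # MJ/Nm^3
```

    Hydrogen's low density is why high H2 ratios push the Wobbe index up even though its volumetric heating value is modest.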

  19. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
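
    The interpolation step can be sketched as follows, assuming simple linear interpolation in height (the abstract does not specify the interpolation scheme, and the grids here are invented):

```python
import numpy as np

def room_temperature_model(t_ceiling, t_floor, ceiling_height, z):
    """Temperature map at height z by linear interpolation between floor and ceiling maps."""
    w = z / ceiling_height  # 0 at the floor, 1 at the ceiling
    return (1 - w) * t_floor + w * t_ceiling

# Hypothetical 4x5 grids of sensor-derived temperatures (deg C) at ceiling and floor.
t_ceil = np.full((4, 5), 28.0)
t_floor = np.full((4, 5), 20.0)

mid = room_temperature_model(t_ceil, t_floor, ceiling_height=3.0, z=1.5)
print(mid[0, 0])  # halfway between floor and ceiling: 24.0
```

    Evaluating the function at many heights yields the three-dimensional temperature distribution from only the two measured boundary maps.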

  20. Bose-Einstein correlation of particles produced by expanding sources

    International Nuclear Information System (INIS)

    Hama, Y.; Padula, S.S.

    1988-01-01

    Bose-Einstein correlation is discussed for particles produced by rapidly expanding sources, when kinematical effects hinder a direct relation between the observed correlations and the source dimensions. Some of these effects are illustrated by considering Landau's hydrodynamical model, wherein each space-time point of the fluid with temperature T = T_c ≅ m_π is taken as an independent and chaotic emitting center with a Planck spectral distribution. In particular, this model reproduces surprisingly well the observed π-π and K-K correlations at the CERN ISR.

  1. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    Science.gov (United States)

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.

  2. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples to the spins of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
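
    A minimal sketch of such a coupled spin model (the mean-field update rule, parameters, and couplings are assumptions for illustration, not the paper's exact specification): each market is a set of spins, each market's local field includes the magnetization of the other markets, and the magnetization serves as the return proxy.

```python
import numpy as np

rng = np.random.default_rng(7)

K, N, T = 2, 200, 500        # number of markets, spins per market, time steps
beta, J, Jc = 1.0, 1.0, 0.3  # inverse temperature, self- and cross-coupling (assumed)

spins = rng.choice([-1, 1], size=(K, N))
returns = np.zeros((T, K))

for t in range(T):
    m = spins.mean(axis=1)  # magnetization of each market
    for k in range(K):
        # Mean-field local field: own magnetization plus coupling to the other markets.
        h = J * m[k] + Jc * (m.sum() - m[k])
        # Parallel heat-bath update: flip probability 1 / (1 + exp(2*beta*h*s)).
        flip = rng.random(N) < 1.0 / (1.0 + np.exp(2.0 * beta * h * spins[k]))
        spins[k, flip] *= -1
    returns[t] = spins.mean(axis=1)  # "return" proxy: magnetization per market

# Volatility proxy: absolute changes of the two series; with positive cross-coupling
# the volatilities of the markets tend to be correlated.
vol = np.abs(np.diff(returns, axis=0))
cross_corr = np.corrcoef(vol[:, 0], vol[:, 1])[0, 1]
print(cross_corr)
```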

  3. Characterization of multiple antilisterial peptides produced by sakacin P-producing Lactobacillus sakei subsp. sakei 2a.

    Science.gov (United States)

    Carvalho, Kátia G; Bambirra, Felipe H S; Nicoli, Jacques R; Oliveira, Jamil S; Santos, Alexandre M C; Bemquerer, Marcelo P; Miranda, Antonio; Franco, Bernadette D G M

    2018-05-01

    Antimicrobial compounds produced by lactic acid bacteria can be explored as natural food biopreservatives. In a previous report, the main antimicrobial compounds produced by the Brazilian meat isolate Lactobacillus sakei subsp. sakei 2a, i.e., the bacteriocin sakacin P and two ribosomal peptides (P2 and P3) active against Listeria monocytogenes, were described. In this study, we report the spectrum of activity, molecular mass, structural identity and mechanism of action of six additional antilisterial peptides produced by Lb. sakei 2a, detected in a 24 h culture in MRS broth submitted to acid treatment (pH 1.5) and proper fractionation and purification steps to obtain free and cell-bound proteins. The six peptides presented similarity to different ribosomal proteins of Lb. sakei subsp. sakei 23K, and their molecular masses varied from 4.6 to 11.0 kDa. All peptides were capable of increasing the efflux of ATP and decreasing the membrane potential in Listeria monocytogenes. The activity of a pool of the obtained antilisterial compounds [enriched active fraction (EAF)] against Listeria monocytogenes in a food model (meat gravy) during refrigerated storage (4 °C) for 10 days was also tested, and the results indicated that the populations of L. monocytogenes in the food model containing the acid extract remained lower than those at day 0, evidencing that the acid extract of a culture of Lb. sakei 2a is a good technological alternative for the control of growth of L. monocytogenes in foods.

  4. Search for the Standard Model Higgs boson produced in association with a vector boson and decaying to bottom quarks with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Scanlon Tim

    2013-05-01

    This note presents an updated search with the ATLAS experiment for the Standard Model Higgs boson produced in association with a W or Z boson and decaying to bb̅, using 4.7 fb−1 of LHC data at √s = 7 TeV and 13.0 fb−1 at √s = 8 TeV. The search is performed using events containing zero, one or two electrons or muons, targeting the three decay modes ZH → νν̅bb̅, WH → ℓνbb̅ and ZH → ℓ+ℓ-bb̅. No significant excess is observed. For mH = 125 GeV, the observed (expected) upper limit on the cross section times the branching ratio is found to be 1.8 (1.9) times the Standard Model prediction. The production of diboson pairs, WZ and ZZ, with a Z boson decaying to bb̅, has been observed with a significance of 4.0 standard deviations at a rate compatible with the Standard Model expectation.

  5. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
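
    The integral representation described can be written as a worked formula (notation mine; this is the standard continuous-time discounting form, not necessarily the paper's exact axiomatization):

```latex
% Value of an outcome stream x(t) over the interval [0, T]: an integral of a
% discounting function d(t) times a scale (utility) v applied at each instant.
V(x) \;=\; \int_0^T d(t)\, v\big(x(t)\big)\, dt,
\qquad \text{e.g. } d(t) = e^{-\rho t} \text{ for constant-rate discounting.}
```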

  6. Research in Distributed Real-Time Systems

    Science.gov (United States)

    Mukkamala, R.

    1997-01-01

    This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results in the scalability issues of distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.

  7. A time consistent risk averse three-stage stochastic mixed integer optimization model for power generation capacity expansion

    International Nuclear Information System (INIS)

    Pisciella, P.; Vespucci, M.T.; Bertocchi, M.; Zigrino, S.

    2016-01-01

    We propose a multi-stage stochastic optimization model for the generation capacity expansion problem of a price-taker power producer. Uncertainties regarding the evolution of electricity prices and fuel costs play a major role in long term investment decisions, therefore the objective function represents a trade-off between expected profit and risk. The Conditional Value at Risk is the risk measure used and is defined by a nested formulation that guarantees time consistency in the multi-stage model. The proposed model allows one to determine a long term expansion plan which takes into account uncertainty, while the LCoE approach, currently used by decision makers, only allows one to determine which technology should be chosen for the next power plant to be built. A sensitivity analysis is performed with respect to the risk weighting factor and budget amount. - Highlights: • We propose a time consistent risk averse multi-stage model for capacity expansion. • We introduce a case study with uncertainty on electricity prices and fuel costs. • Increased budget moves the investment from gas towards renewables and then coal. • Increased risk aversion moves the investment from coal towards renewables. • Time inconsistency leads to a profit gap between planned and implemented policies.
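
    The CVaR building block can be sketched with a plain single-period sample estimate (the paper uses a nested, time-consistent multi-stage formulation; the scenario numbers below are invented): the risk-averse objective weighs expected loss against the mean loss in the worst tail.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample Conditional Value at Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)  # Value at Risk at level alpha
    return losses[losses >= var].mean()

# Toy profit scenarios (losses = negated profits) for two hypothetical expansion plans.
rng = np.random.default_rng(3)
plan_a = -rng.normal(100, 10, 10_000)  # low-risk plan
plan_b = -rng.normal(110, 40, 10_000)  # higher expected profit, much riskier

# Risk-averse objective: weighted trade-off between expected loss and CVaR.
lam = 0.5  # risk weighting factor (assumed)
score_a = (1 - lam) * plan_a.mean() + lam * cvar(plan_a)
score_b = (1 - lam) * plan_b.mean() + lam * cvar(plan_b)
print(score_a, score_b)
```

    With these numbers the risk-averse criterion scores the low-risk plan better despite its lower expected profit, mirroring the paper's finding that increased risk aversion shifts investment toward less volatile technologies.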

  8. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special cases include the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of a very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting general ultraslow random motion.
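
    A Monte Carlo sketch of the traditional logarithmic special case (not the inverse Mittag-Leffler generalization): waiting times are drawn by inverting the CDF of psi(t) = 1/(t ln²t), t > e, which gives t = exp(1/(1-u)) for uniform u, and the resulting MSD grows roughly like ln(t), far slower than any power law.

```python
import numpy as np

rng = np.random.default_rng(5)

def waiting_times(size):
    # Inverse-CDF sampling of the logarithmic ultraslow waiting density
    # psi(t) = 1 / (t * ln(t)^2), t > e: CDF is 1 - 1/ln(t), so t = exp(1/(1-u)).
    u = rng.random(size)
    return np.exp(np.minimum(1.0 / (1.0 - u), 700.0))  # clip exponent to avoid overflow

def msd_at(times, n_walkers=2000, max_steps=400):
    """MSD of a CTRW with +/-1 jumps at the given observation times."""
    waits = waiting_times((n_walkers, max_steps))
    arrivals = np.cumsum(waits, axis=1)
    steps = rng.choice([-1.0, 1.0], size=(n_walkers, max_steps))
    out = []
    for t in times:
        done = arrivals <= t                 # which jumps happened by time t
        disp = (steps * done).sum(axis=1)    # displacement of each walker
        out.append((disp**2).mean())
    return out

print(msd_at([1e2, 1e4, 1e8]))  # grows slowly, roughly linearly in ln(t)
```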

  9. Adsorption of Pb(II) and Cu(II) by Ginkgo-Leaf-Derived Biochar Produced under Various Carbonization Temperatures and Times

    Directory of Open Access Journals (Sweden)

    Myoung-Eun Lee

    2017-12-01

    Ginkgo trees are common street trees in Korea, and the large amounts of leaves that fall onto the streets annually need to be cleaned and treated. Therefore, fallen ginkgo leaves have been used as a raw material to produce biochar for the removal of heavy metals from solutions. Ginkgo-leaf-derived biochar was produced under various carbonization temperatures and times. This study evaluated the physicochemical properties and the Pb(II) and Cu(II) adsorption characteristics of ginkgo-leaf-derived biochar samples produced under different carbonization conditions. The biochar samples that were produced at 800 °C for 90 and 120 min contained the highest oxygen- and nitrogen-substituted carbons, which might contribute to a high metal-adsorption rate. The intensity of the phosphate bond increased with increasing carbonization temperature up to 800 °C and after 90 min of carbonization. The Pb(II) and Cu(II) adsorption capacities were the highest when the ginkgo-leaf-derived biochar was produced at 800 °C, with removal rates of 99.2% and 34.2%, respectively. The highest removal rate was achieved when the intensity of the phosphate functional group in the biochar was the highest. Therefore, the ginkgo-leaf-derived biochar produced at 800 °C for 90 min can be used as an effective bio-adsorbent in the removal of metals from solutions.

  10. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (possibly an abort) completely unrelated to m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. This work builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  11. Model-based Integration of Past & Future in TimeTravel

    DEFF Research Database (Denmark)

    Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach

    2012-01-01

    We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy. The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. In addition, the system uses the model representation to answer both approximate and exact queries. TimeTravel is implemented in PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed-up.

  12. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749

  13. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C{sub 1} continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.

  14. A composite model of the space-time and 'colors'

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1987-03-01

    A pregeometric and pregauge model of the space-time and "colors" in which the space-time metric and "color" gauge fields are both composite is presented. By the non-triviality of the model, the number of space-time dimensions is restricted to be not larger than the number of "colors". The long-conjectured space-color correspondence is realized in the model action of the Nambu-Goto type, which is invariant under both general-coordinate and local-gauge transformations. (author)

  15. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data.

  16. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  17. Investigation of radiopharmaceuticals from cyclotron produced radionuclides and development of mathematical models. Part of a coordinated programme on production of radiopharmaceuticals from accelerator-produced isotopes

    International Nuclear Information System (INIS)

    Slaus, I.

    1983-04-01

    Several radioisotopes for diagnostic use in nuclear medicine are produced using the internal 15 MeV deuteron (30 MeV alpha) beam of the ''Ruder Boskovic'' Institute in Zagreb, Yugoslavia. Some of the most important radioisotopes produced during the last few years are: Gallium-67 (d,xn reaction on a Cu/Ni/Zn target) with a yield of 7.6 MBq/µAh, the ⁸¹Rb/⁸¹ᵐKr generator (α,2n reaction on a Cu/Cu₂Br₂ target) with a yield of 99 MBq/µAh, Iodine-123 (α,2n reaction on a Cu/Ag/Sb target) with a yield of 6.3 MBq/µAh, and Indium-111 (α,2n reaction on a Cu/Cu/Ag target) with a yield of 7.2 MBq/µAh. In addition, a simple mathematical lung model for regional ventilation measurements was developed and used for ventilation studies on normal subjects and subjects with various lung diseases. Based on these studies, a more sophisticated and quantitative lung ventilation model for radioactive tracer tidal breathing was developed. This new model fully accounts for the periodicity of breathing and makes it possible to actually determine lung ventilation and volume parameters. The model was experimentally verified on healthy subjects, and the value of the effective specific ventilation obtained agrees with comparable parameters in the literature. ⁸¹ᵐKr from a generator was used to perform these experimental studies

  18. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
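The two-step adaptive LASSO idea described above can be sketched in a few lines of NumPy: an OLS pilot fit supplies per-coefficient penalty weights, and a weighted coordinate-descent lasso then performs selection. This is a minimal illustration on synthetic cross-sectional data, not the authors' estimator; the time-series aspects (lagged covariates, heteroskedastic errors) are omitted, and the penalty value is arbitrary.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used by coordinate-descent lasso."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, weights=None, n_iter=200):
    """Coordinate descent for min 0.5*||y - Xb||^2 + lam * sum(w_j * |b_j|)."""
    n, p = X.shape
    w = np.ones(p) if weights is None else weights
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding j
            b[j] = soft_threshold(X[:, j] @ r, lam * w[j]) / col_sq[j]
    return b

def adalasso(X, y, lam):
    """Two-step adaptive lasso: OLS pilot fit, then lasso with weights 1/|b_ols|."""
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    weights = 1.0 / np.maximum(np.abs(b_ols), 1e-8)
    return lasso_cd(X, y, lam, weights)

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = [2.0, -1.5]                              # sparse truth: 2 relevant covariates
y = X @ beta + 0.1 * rng.standard_normal(n)
b_hat = adalasso(X, y, lam=5.0)
print(np.flatnonzero(np.abs(b_hat) > 0.1))          # indices of selected coefficients
```

The pilot-based weights penalize the irrelevant covariates heavily, so they are driven exactly to zero while the two true coefficients survive almost unshrunk.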

  19. Models for Pooled Time-Series Cross-Section Data

    Directory of Open Access Journals (Sweden)

    Lawrence E Raffalovich

    2015-07-01

    Full Text Available Several models are available for the analysis of pooled time-series cross-section (TSCS) data, defined as “repeated observations on fixed units” (Beck and Katz 1995). In this paper, we run the following models: (1) a completely pooled model, (2) fixed-effects models, and (3) multi-level/hierarchical linear models. To illustrate these models, we use a Generalized Least Squares (GLS) estimator with cross-section weights and panel-corrected standard errors (with EViews 8) on the cross-national homicide trends data of forty countries from 1950 to 2005, which we source from published research (Messner et al. 2011). We describe and discuss the similarities and differences between the models, and what information each can contribute to help answer substantive research questions. We conclude with a discussion of how the models we present may help to mitigate validity threats inherent in pooled time-series cross-section data analysis.
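As a concrete illustration of the difference between a completely pooled model and a fixed-effects model, the within (demeaning) transformation can be sketched in NumPy. This is a toy single-regressor example under invented parameters, not the GLS/EViews setup used in the paper:

```python
import numpy as np

def within_estimator(y, x, units):
    """Fixed-effects (within) estimator: demean y and x by unit, then pooled OLS."""
    y_d = y.astype(float).copy()
    x_d = x.astype(float).copy()
    for u in np.unique(units):
        m = units == u
        y_d[m] -= y_d[m].mean()
        x_d[m] -= x_d[m].mean()
    return (x_d @ y_d) / (x_d @ x_d)                # slope on the demeaned data

rng = np.random.default_rng(1)
n_units, n_periods = 40, 56                          # e.g. 40 countries, 56 years
units = np.repeat(np.arange(n_units), n_periods)
alpha = rng.normal(0.0, 5.0, n_units)[units]         # unit fixed effects
x = rng.standard_normal(units.size) + 0.5 * alpha    # regressor correlated with effects
beta = 1.3
y = alpha + beta * x + 0.5 * rng.standard_normal(units.size)
print(within_estimator(y, x, units))                 # close to the true slope 1.3
```

Because the regressor is correlated with the unit effects, a completely pooled regression of `y` on `x` here is badly biased upward, while demeaning removes the fixed effects and recovers the true slope.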

  20. A trim-loss minimization in a produce-handling vehicle production plant

    Directory of Open Access Journals (Sweden)

    Apichai Ritvirool

    2007-01-01

    Full Text Available How to cut the required pieces from raw materials while minimizing waste is the trim-loss problem. An integer linear programming (ILP) model was developed to solve this problem. In addition, this ILP model can be used for planning orders over a future time period. The time horizon for ordering raw material, whether weekly, monthly, quarterly, or annual, can be planned to reduce trim loss. Numerical examples based on an industrial case study of a produce-handling vehicle production plant are presented to illustrate how the proposed ILP model can be applied to actual systems and the types of information obtained relative to implementation. The results show that the proposed ILP model can be used as a decision-support tool for selecting the time horizon of order planning and the cutting patterns that decrease material cost and waste from cutting raw material.
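For intuition, a tiny brute-force version of the trim-loss problem can be written in plain Python: enumerate the maximal cutting patterns for one stock bar, then search over pattern multiplicities for the least waste. A real ILP solver replaces the exhaustive loop; the stock length, piece lengths, and demands below are hypothetical.

```python
from itertools import product

def maximal_patterns(stock_len, piece_lens):
    """All maximal cutting patterns (piece counts) for a single stock bar."""
    ranges = [range(stock_len // p + 1) for p in piece_lens]
    pats = []
    for combo in product(*ranges):
        used = sum(c * p for c, p in zip(combo, piece_lens))
        # maximal: pattern fits, and no further piece fits in the leftover
        if used <= stock_len and all(used + p > stock_len for p in piece_lens):
            pats.append(combo)
    return pats

def min_trim_loss(stock_len, piece_lens, demand, max_bars):
    """Exhaustive search over pattern multiplicities; returns (waste, bars used)."""
    pats = maximal_patterns(stock_len, piece_lens)
    needed = sum(d * p for d, p in zip(demand, piece_lens))
    best = None
    for mult in product(range(max_bars + 1), repeat=len(pats)):
        bars = sum(mult)
        if bars == 0 or bars > max_bars:
            continue
        if all(sum(m * pat[i] for m, pat in zip(mult, pats)) >= demand[i]
               for i in range(len(demand))):
            waste = bars * stock_len - needed
            if best is None or waste < best[0]:
                best = (waste, bars)
    return best

# Cut 4 pieces of 3 m, 3 of 4 m, and 2 of 5 m from 10 m stock bars.
print(min_trim_loss(10, [3, 4, 5], [4, 3, 2], max_bars=5))   # (waste, bars used)
```

An ILP formulation optimizes the same pattern multiplicities with a solver instead of enumeration, which is what makes the approach scale to plant-sized order books.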

  1. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    Science.gov (United States)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  2. Can single classifiers be as useful as model ensembles to produce benthic seabed substratum maps?

    Science.gov (United States)

    Turner, Joseph A.; Babcock, Russell C.; Hovey, Renae; Kendrick, Gary A.

    2018-05-01

    Numerous machine-learning classifiers are available for benthic habitat map production, which can lead to different results. This study highlights the performance of the Random Forest (RF) classifier, which was significantly better than Classification Trees (CT), Naïve Bayes (NB), and a multi-model ensemble in terms of overall accuracy, Balanced Error Rate (BER), Kappa, and area under the curve (AUC) values. RF accuracy was often higher than 90% for each substratum class, even at the most detailed level of the substratum classification and AUC values also indicated excellent performance (0.8-1). Total agreement between classifiers was high at the broadest level of classification (75-80%) when differentiating between hard and soft substratum. However, this sharply declined as the number of substratum categories increased (19-45%) including a mix of rock, gravel, pebbles, and sand. The model ensemble, produced from the results of all three classifiers by majority voting, did not show any increase in predictive performance when compared to the single RF classifier. This study shows how a single classifier may be sufficient to produce benthic seabed maps and model ensembles of multiple classifiers.
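Majority voting of the kind used to build the multi-model ensemble can be written in a few lines. The classifier names match the study, but the labels below are hypothetical, not the study's data:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists into one ensemble prediction by majority."""
    ensemble = []
    for labels in zip(*predictions):                 # labels for one sample across models
        ensemble.append(Counter(labels).most_common(1)[0][0])
    return ensemble

# Hypothetical substratum labels from three classifiers (RF, CT, NB) on 5 samples.
rf = ["rock", "sand", "rock", "gravel", "sand"]
ct = ["rock", "sand", "sand", "gravel", "rock"]
nb = ["sand", "sand", "rock", "rock",   "sand"]
print(majority_vote([rf, ct, nb]))  # ['rock', 'sand', 'rock', 'gravel', 'sand']
```

As the study observes, when one member (here RF) already dominates, the vote largely reproduces that member's predictions, so the ensemble adds little over the single best classifier.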

  3. Time Optimal Reachability Analysis Using Swarm Verification

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

    Time optimal reachability analysis employs model-checking to compute goal states that can be reached from an initial state with a minimal accumulated time duration. The model-checker may produce a corresponding diagnostic trace, which can be interpreted as a feasible schedule for many scheduling and planning problems, response time optimization, etc. We propose swarm verification to accelerate time optimal reachability using the real-time model-checker Uppaal. In swarm verification, a large number of model-checker instances execute in parallel on a computer cluster using different, typically randomized, search strategies. We develop four swarm algorithms and evaluate them with four models in terms of scalability and time and memory consumption. Three of these cooperate by exchanging costs of intermediate solutions to prune the search using a branch-and-bound approach. Our results show that swarm...

  4. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  5. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  6. Lichen Parmelia sulcata time response model to environmental elemental availability

    International Nuclear Information System (INIS)

    Reis, M.A.; Alves, L.C.; Freitas, M.C.; Os, B. van; Wolterbeek, H.Th.

    2000-01-01

    Transplants of the lichen Parmelia sulcata, collected in an area previously identified as non-polluted, were placed at six stations, five of which were near power plants and one in an area expected to be a remote station. Together with the lichen transplants, two total-deposition collection buckets and an aerosol sampler were installed. Two lichens were recollected from each station every month. At the same time, the water collection buckets were replaced by new ones. The aerosol sampler filter was replaced every week, collection being effective only for 10 minutes out of every two hours; at the remote station, aerosol filters were replaced only once a month, with the same collection rate. Each station was run for a period of one year. Both lichens and aerosol filters were analysed by PIXE and INAA at ITN. Total deposition samples were dried under an infrared lamp, then acid-digested and analysed by ICP-MS at the National Geological Survey of The Netherlands. Data for the three types of samples were then produced for a total of 16 elements. In this work we used the data set thus obtained to test a model for the time response of the lichen Parmelia sulcata to a new environment. (author)

  7. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  8. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, up to around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
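A hedged sketch of the general approach, logistic regression with a time-dependent predictor, fitted by plain gradient descent on synthetic data. The feature names and coefficients are invented for illustration and are not taken from the paper:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=4000):
    """Gradient-descent logistic regression (minimal stand-in for the paper's fit)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted no-show probability
        w -= lr * X.T @ (p - y) / len(y)            # average log-loss gradient step
    return w

rng = np.random.default_rng(3)
n = 2000
past_no_show_rate = rng.uniform(0, 1, n)            # hypothetical time-dependent feature
lead_time = rng.uniform(0, 60, n)                   # days between booking and visit
X = np.column_stack([np.ones(n), past_no_show_rate, lead_time / 60])
true_logit = -2.0 + 3.0 * past_no_show_rate + 1.0 * lead_time / 60
y = (rng.uniform(0, 1, n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
w = fit_logistic(X, y)
print(np.round(w, 1))                               # roughly recovers [-2, 3, 1]
```

In the paper the key design choice is that one such model is fitted per number of available past appointments (26 models in total); the sketch above corresponds to a single one of them.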

  9. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  10. Decoherence effect in neutrinos produced in microquasar jets

    Science.gov (United States)

    Mosquera, M. E.; Civitarese, O.

    2018-04-01

    We study the effect of decoherence on the neutrino spectra produced in microquasar jets. In order to analyse the precession of the neutrino polarization vector, we have calculated its time evolution by solving the corresponding equations of motion under two different scenarios: (i) mixing between two active neutrinos, and (ii) mixing between one active and one sterile neutrino. The results of the calculations for these scenarios show that the onset of decoherence does not depend on the activation of neutrino-neutrino interactions when realistic values of the coupling are used. We also discuss the case of neutrinos produced in windy microquasars and compare the results with those obtained from more conventional models of microquasars.

  11. Practical Solutions for Reducing Container Ships’ Waiting Times at Ports Using Simulation Model

    Institute of Scientific and Technical Information of China (English)

    Abdorreza Sheikholeslami; Gholamreza Ilati; Yones Eftekhari Yeganeh

    2013-01-01

    The main challenge for container ports is the planning required for berthing container ships while docked in port. The growth of containerization is creating problems for ports and container terminals as they reach the capacity limits of various resources, which increasingly leads to traffic and port congestion. Good planning and management of container terminal operations reduces waiting time for liner ships. Reducing the waiting time improves the terminal's productivity and decreases the port's difficulties. Two important keys to reducing waiting time with berth allocation are determining suitable access channel depths and increasing the number of berths, which are studied and analyzed in this paper as practical solutions. Simulation-based analysis is the only way to understand how various resources interact with each other and how they affect the berthing time of ships. We used the Enterprise Dynamics software to produce simulation models due to the complexity and nature of the problems. We further present a case study for berth allocation simulation of the biggest container terminal in Iran; the optimum access channel depth and number of berths are obtained from the simulation results. The results show a significant reduction in the waiting time for container ships and can be useful for major functions in operations and development of container ship terminals.
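A minimal sketch of the kind of simulation analysis described, using a toy exponential arrival/service queue in plain Python rather than Enterprise Dynamics; all rates are hypothetical. Ships join the earliest-free berth, and adding a berth shortens the average wait:

```python
import random

def simulate_waits(n_ships, n_berths, mean_interarrival, mean_service, seed=42):
    """Toy berth-allocation queue: each ship waits for the earliest free berth."""
    rng = random.Random(seed)
    berth_free = [0.0] * n_berths                    # time each berth becomes available
    t, waits = 0.0, []
    for _ in range(n_ships):
        t += rng.expovariate(1.0 / mean_interarrival)    # next ship arrival
        i = min(range(n_berths), key=lambda b: berth_free[b])
        start = max(t, berth_free[i])
        waits.append(start - t)
        berth_free[i] = start + rng.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

# Average wait (hours) for a hypothetical terminal with 2 vs 3 berths.
for berths in (2, 3):
    print(berths, round(simulate_waits(5000, berths, 4.0, 7.0), 2))
```

With a mean interarrival of 4 hours and mean service of 7 hours, two berths run near saturation while three berths cut the average wait sharply, which mirrors the paper's finding that increasing the number of berths is the dominant lever.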

  12. Forecast model of landslides in a short time

    International Nuclear Information System (INIS)

    Sanchez Lopez, Reinaldo

    2006-01-01

    IDEAM, in carrying out its functions as a member of the national technical committee for disaster prevention and response (SNPAD), performs real-time follow-up, monitoring, and forecasting of the environmental dynamics that, in extreme situations, constitute natural hazards and risks. One of the most frequent and highest-impact of these dynamics is landslides, which persistently affect human life, infrastructure, socioeconomic activities, and the balance of the environment. Landslides in Colombia and worldwide are caused mainly by rainfall; for this reason, IDEAM has been developing a short-term forecast model as an instrument for risk management. This article presents the model's structure, operation, spatio-temporal resolution, products, results, achievements, and projections. Conceptually, the model is supported by the principle of the spatio-temporal dynamics of the processes that produce natural hazards, particularly in areas where human activity has created risk. Structurally, the model is composed of two sub-models: a general terrain-susceptibility model, and a critical-rain model as the denotative factor that triggers the hazard process. In real time, the model works as a GIS, permitting automatic zoning of landslide hazard so that public advisory warnings can be issued to help decision makers manage the risk frequently caused by these events in the country.

  13. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Trend time-series modeling and forecasting with neural networks.

    Science.gov (United States)

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
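The differencing strategy the study favours is easy to demonstrate: first differences of a linear-trend series leave a stationary sequence centred on the slope, which is the form a neural network can model reliably. A hypothetical series:

```python
import numpy as np

# A linear-trend series with noise: first differencing removes the deterministic
# trend, leaving a stationary sequence centred on the slope (0.5 here).
t = np.arange(100)
rng = np.random.default_rng(0)
series = 0.5 * t + 10.0 + 0.2 * rng.standard_normal(100)
diffed = np.diff(series)
print(round(float(diffed.mean()), 2))   # near the slope 0.5, trend removed
```

A network trained on `diffed` predicts one-step changes; forecasts of the original series are recovered by cumulatively adding the predicted changes back to the last observed level.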

  15. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  16. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  17. Process for producing ethanol from syngas

    Science.gov (United States)

    Krause, Theodore R; Rathke, Jerome W; Chen, Michael J

    2013-05-14

    The invention provides a method for producing ethanol, the method comprising establishing an atmosphere containing a methanol-forming catalyst and an ethanol-forming catalyst; injecting syngas into the atmosphere at a temperature and for a time sufficient to produce methanol; and contacting the produced methanol with additional syngas at a temperature and for a time sufficient to produce ethanol. The invention also provides an integrated system for producing methanol and ethanol from syngas, the system comprising an atmosphere isolated from the ambient environment; a first catalyst to produce methanol from syngas, wherein the first catalyst resides in the atmosphere; a second catalyst to produce ethanol from methanol and syngas, wherein the second catalyst resides in the atmosphere; a conduit for introducing syngas to the atmosphere; and a device for removing ethanol from the atmosphere. The exothermicity of the method and system obviates the need for input of additional heat from outside the atmosphere.

  18. Producing the Spielberg Brand

    OpenAIRE

    Russell, J.

    2016-01-01

    This chapter looks at the manufacture of Spielberg’s brand, and the limits of its usage. Spielberg’s directorial work is well known, but Spielberg’s identity has also been established in other ways, and I focus particularly on his work as a producer. At the time of writing, Spielberg had produced (or executive produced) 148 movies and television series across a range of genres that takes in high budget blockbusters and low budget documentaries, with many more to come. In these texts, Spielber...

  19. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana; Ait Abderrahmane, Hamid; Upadhyay, Ranjit Kumar; Kumari, Nitu

    2013-01-01

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates of its fractal dimension as well as numerical simulations to visualise the spatiotemporal chaos.

  1. Manifestation of a neuro-fuzzy model to produce landslide susceptibility map using remote sensing data derived parameters

    Science.gov (United States)

    Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred

    Landslides are the most common natural hazards in Malaysia. Preparation of landslide susceptibility maps is important for engineering geologists and geomorphologists. However, due to the complex nature of landslides, producing a reliable susceptibility map is not easy. In this study, a new attempt is made to produce a landslide susceptibility map of a part of the Cameron Valley of Malaysia. This paper develops an adaptive neuro-fuzzy inference system (ANFIS) based on a geographic information system (GIS) environment for landslide susceptibility mapping. To obtain the neuro-fuzzy relations for producing the landslide susceptibility map, landslide locations were identified from interpretation of aerial photographs and high-resolution satellite images, field surveys, and historical inventory reports. Landslide conditioning factors such as slope, plan curvature, distance to drainage lines, soil texture, lithology, and distance to lineament were extracted from topographic, soil, and lineament maps. Landslide-susceptible areas were analyzed by the ANFIS model and mapped using the conditioning factors. Furthermore, we applied various membership functions (MFs) and fuzzy relations to produce landslide susceptibility maps. The prediction performance of the susceptibility map is checked against actual landslides in the study area. Results show that triangular, trapezoidal, and polynomial MFs were the best individual MFs for modelling landslide susceptibility maps (86

  2. Stochastic Models in the DORIS Position Time Series: Estimates from the IDS Contribution to the ITRF2014

    Science.gov (United States)

    Klos, A.; Bogusz, J.; Moreaux, G.

    2017-12-01

    This research focuses on the investigation of the deterministic and stochastic parts of the DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) weekly coordinate time series from the IDS contribution to the ITRF2014. A set of 90 stations was divided into three groups depending on when the data was collected at an individual station. To reliably describe the DORIS time series, we employed a mathematical model that included the long-term nonlinear signal, linear trend, and seasonal oscillations (these three sum up to produce the Polynomial Trend Model), plus a stochastic part, all being resolved with Maximum Likelihood Estimation (MLE). We proved that the values of the parameters delivered for DORIS data are strictly correlated with the time span of the observations, meaning that the most recent data are the most reliable. Not only did the seasonal amplitudes decrease over the years, but also, and most importantly, the noise level and its type changed significantly. We examined five different noise models for the stochastic part of the DORIS time series: pure white noise (WN), pure power-law noise (PL), a combination of white and power-law noise (WNPL), an autoregressive process of first order (AR(1)), and a Generalized Gauss-Markov model (GGM). Our study indicates that the PL process may be chosen as the preferred one for most of the DORIS data. Moreover, the preferred noise model changed over the years from AR(1) to pure PL, with a few stations characterized by a positive spectral index.

  3. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To capture the time-variant and nonlinear characteristics of industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS), and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model-updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately. (paper)
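The time-difference idea can be sketched with a recursive least-squares update (a simple single-variable stand-in for the moving-window recursive PLS in the paper; the process below is invented). Modelling differences Δx → Δy cancels slowly drifting offsets that would otherwise bias the model:

```python
import numpy as np

def rls(x, y, lam=0.99, delta=1e3):
    """Recursive least squares with forgetting factor lam: w tracks y ~ x @ w."""
    w = np.zeros(x.shape[1])
    P = np.eye(x.shape[1]) * delta                   # inverse-covariance estimate
    for xi, yi in zip(x, y):
        k = P @ xi / (lam + xi @ P @ xi)             # gain vector
        w = w + k * (yi - xi @ w)                    # update with prediction error
        P = (P - np.outer(k, xi @ P)) / lam
    return w

rng = np.random.default_rng(5)
n = 500
u = np.cumsum(rng.standard_normal(n))                # drifting process input
drift = np.cumsum(0.02 * rng.standard_normal(n))     # slow sensor/offset drift
y = 2.0 * u + drift + 0.1 * rng.standard_normal(n)   # measured quality variable
du = np.diff(u).reshape(-1, 1)                       # time differences remove
dy = np.diff(y)                                      # the slowly varying drift
print(round(float(rls(du, dy)[0]), 2))               # close to the true gain 2.0
```

The forgetting factor plays the role of the moving window: older difference samples are downweighted, so the model adapts if the process gain itself changes over time.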

  4. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    Directory of Open Access Journals (Sweden)

    Ladeesh V. G.

    2017-01-01

    Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique, which combines the grinding action of an abrasive tool and the thermal effects of electrochemical discharges to remove material from the workpiece and produce complex contours. The present study focuses on machining fluidic channels on borosilicate glass using G-ECDM and develops a mathematical model for the surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency and tool feed rate are identified as the significant factors controlling the surface roughness of the channels produced by G-ECDM. A mathematical model is developed for surface roughness by considering the grinding action and the thermal effects of electrochemical discharges in material removal. Experiments are conducted to validate the model, and the results obtained are in good agreement with those predicted by the model.

  5. Time resolved diagnostics and kinetic modelling of a modulated hollow cathode discharge of NO2

    International Nuclear Information System (INIS)

    Castillo, M; Herrero, V J; Mendez, I; Tanarro, I

    2004-01-01

    The transients associated with the ignition and the extinction of the cold plasma produced in a low frequency, square-wave modulated, hollow cathode discharge of nitrogen dioxide are characterized by time resolved emission spectroscopy, mass spectrometry and electrical probes. The temporal evolution of the concentrations of neutral species created or destroyed in the NO2 discharges is compared with the predictions of a simple kinetic model previously developed for discharges of other nitrogen oxides (N2O and NO). The physical conditions of pressure, gas flow rate, modulation frequency and electrical current in the NO2 plasma were selected to highlight the time-dependent behaviour of some of the stable species formed in the discharge, especially the nitrogen oxide products, whose concentrations show transient maxima. The usefulness of the analysis of the transient results is emphasized as a means to evaluate the relevance of the different elementary processes and as a key to estimating the values of some of the rate constants critical to the modelling. This work is dedicated to the memory of Professor Jose Campos

  6. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    International Nuclear Information System (INIS)

    Lovejoy, S.; Lima, M. I. P. de

    2015-01-01

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate “macroweather” regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behaviour is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be “homogenized” by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.

  7. Robust Forecasting of Non-Stationary Time Series

    NARCIS (Netherlands)

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable

  8. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model

  9. Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm

    Directory of Open Access Journals (Sweden)

    Zhengyu Duan

    2015-11-01

    Full Text Available This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both can guarantee delivery within the time windows of customers. The robust optimal schedule time model only requires the variation range of link travel time, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on a robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm converges well. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model proves better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable in real road networks.
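The robust model's key ingredient, that only the variation range [lo, hi] of each link travel time is needed, can be sketched as a worst-case arrival-time check along a fixed route. The link intervals, service time and customer time windows below are invented for illustration; the actual models also optimize the routes themselves.

```python
# Worst-case arrival check for a fixed route when each link's travel
# time is only known to lie in an interval [lo, hi]
def robust_arrivals(links, service=0.0, start=0.0):
    t = start
    arrivals = []
    for lo, hi in links:
        t += hi               # robust choice: assume the worst-case time
        arrivals.append(t)
        t += service          # service time at the customer
    return arrivals

def feasible(arrivals, windows):
    return all(a <= late for a, (early, late) in zip(arrivals, windows))

links = [(8, 12), (5, 9), (10, 15)]       # minutes, per link
windows = [(0, 15), (0, 30), (0, 45)]     # customer time windows
arr = robust_arrivals(links, service=2.0)
print(arr, feasible(arr, windows))
```

If the worst-case arrivals respect every window, the schedule is delivery-guaranteed regardless of how travel times fluctuate within their ranges.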

  10. Integrated vendor-buyer inventory models with inflation and time value of money in controllable lead time

    Directory of Open Access Journals (Sweden)

    Prashant Jindal

    2016-01-01

    Full Text Available In the current critical global economic scenario, inflation plays a vital role in deciding the optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and time value of money. Shortage is allowed during the lead time and is partially backlogged. Lead time is controllable and can be reduced at a crashing cost. In the first model, the lead-time demand is assumed to follow a normal distribution; in the second model, it is considered distribution-free. For both cases, our objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time and number of lots. The discounted cash flow and classical optimization techniques are used to derive the optimal solution for both cases. Numerical examples, including a sensitivity analysis of system parameters, are provided to validate the results of the supply chain models.
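The discounted cash flow ingredient can be sketched in isolation: a per-cycle cost that inflates at rate i while money is discounted at rate r. The cost, rates and horizon below are illustrative, not taken from the models.

```python
# Discounted cash flow: present value of a recurring per-cycle cost
# when money is discounted at rate r and costs inflate at rate i
def present_value(cost, r, i, cycles):
    return sum(cost * ((1 + i) ** k) / ((1 + r) ** (k + 1))
               for k in range(cycles))

pv = present_value(1000.0, r=0.08, i=0.03, cycles=10)
print(round(pv, 2))  # well below the undiscounted total of 10000
```

Because r > i here, the present value of the ten payments is noticeably less than their nominal sum, which is exactly why inflation and time value of money change the optimal ordering decisions.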

  11. Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Zheng-Fan Liu

    2014-01-01

    Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results.
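The average dwell time technique mentioned above has a standard scalar corollary worth recalling: if each active (stable) subsystem's Lyapunov function decays at rate lam0 and may jump by a factor mu at switching instants, exponential stability is guaranteed when the average dwell time exceeds ln(mu)/lam0. The sketch below states that generic textbook bound, not the paper's LMI conditions; the numbers are illustrative.

```python
import math

# Minimum average dwell time for a switched system: tau* = ln(mu)/lam0,
# where mu >= 1 bounds the Lyapunov-function jump at switches and
# lam0 > 0 is the decay rate of each active subsystem
def min_average_dwell_time(mu, lam0):
    return math.log(mu) / lam0

tau_star = min_average_dwell_time(mu=1.5, lam0=0.2)
print(tau_star)  # ≈ 2.03: switch no faster than once per ~2 time units
```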

  12. Matrix model and time-like linear dilaton matter

    International Nuclear Information System (INIS)

    Takayanagi, Tadashi

    2004-01-01

    We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple, Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  13. A New Battery Energy Storage Charging/Discharging Scheme for Wind Power Producers in Real-Time Markets

    Directory of Open Access Journals (Sweden)

    Minh Y Nguyen

    2012-12-01

    Full Text Available Under a deregulated environment, wind power producers are subject to many regulation costs due to the intermittence of natural resources and the accuracy limits of existing prediction tools. This paper addresses the operation (charging/discharging) problem of battery energy storage installed in a wind generation system in order to improve the value of wind power in the real-time market. Depending on the prediction of market prices and the probabilistic information of wind generation, wind power producers can schedule the battery energy storage for the next day in order to maximize profit. In addition, by taking into account the expenses of using batteries, the proposed charging/discharging scheme is able to avoid the detrimental operation of battery energy storage which can lead to a significant reduction of battery lifetime, i.e., uneconomical operation. The problem is formulated in a dynamic programming framework and solved by a backward dynamic programming algorithm. The proposed scheme is then applied to the study cases, and the simulation results show its effectiveness.
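The backward dynamic programming formulation can be sketched on a toy two-hour problem. The prices, wind output and battery grid below are invented, and the paper's battery-wear costs and efficiencies are omitted: with a cheap hour followed by an expensive one, the optimal policy stores wind first and discharges later.

```python
# Backward dynamic programming over a discretized battery state of
# charge (SOC): choose charge/discharge each hour to maximize revenue
def schedule(prices, wind, soc_levels, step):
    V = {s: 0.0 for s in soc_levels}            # terminal value function
    policy = []
    for p in reversed(prices):                  # backward in time
        newV, dec = {}, {}
        for s in soc_levels:
            best, arg = float("-inf"), 0
            for a in (-step, 0, step):          # discharge / idle / charge
                s2 = s + a
                if s2 not in V:
                    continue                    # SOC bound violated
                revenue = p * (wind - a)        # sell wind minus stored energy
                if revenue + V[s2] > best:
                    best, arg = revenue + V[s2], a
            newV[s], dec[s] = best, arg
        V = newV
        policy.append(dec)
    policy.reverse()                            # policy[t][soc] -> action
    return V, policy

prices = [20.0, 50.0]                           # cheap hour, then dear hour
V, pol = schedule(prices, wind=1.0, soc_levels=[0, 1, 2], step=1)
print(pol[0][0], pol[1][1])                     # charge first, discharge later
```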

  14. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    Science.gov (United States)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.

  15. Real-Time System for Water Modeling and Management

    Science.gov (United States)

    Lee, J.; Zhao, T.; David, C. H.; Minsker, B.

    2012-12-01

    Working closely with the Texas Commission on Environmental Quality (TCEQ) and the University of Texas at Austin (UT-Austin), we are developing a real-time system for water modeling and management using advanced cyberinfrastructure, data integration and geospatial visualization, and numerical modeling. The state of Texas suffered a severe drought in 2011 that cost the state $7.62 billion in agricultural losses (crops and livestock). Devastating situations such as this could potentially be avoided with better water modeling and management strategies that incorporate state of the art simulation and digital data integration. The goal of the project is to prototype a near-real-time decision support system for river modeling and management in Texas that can serve as a national and international model to promote more sustainable and resilient water systems. The system uses National Weather Service current and predicted precipitation data as input to the Noah-MP Land Surface model, which forecasts runoff, soil moisture, evapotranspiration, and water table levels given land surface features. These results are then used by a river model called RAPID, along with an error model currently under development at UT-Austin, to forecast stream flows in the rivers. Model forecasts are visualized as a Web application for TCEQ decision makers, who issue water diversion (withdrawal) permits and any needed drought restrictions; permit holders; and reservoir operation managers. Users will be able to adjust model parameters to predict the impacts of alternative curtailment scenarios or weather forecasts. A real-time optimization system under development will help TCEQ to identify optimal curtailment strategies to minimize impacts on permit holders and protect health and safety. To develop the system we have implemented RAPID as a remotely-executed modeling service using the Cyberintegrator workflow system with input data downloaded from the North American Land Data Assimilation System. 
The

  16. Continuum-time Hamiltonian for the Baxter's model

    International Nuclear Information System (INIS)

    Libero, V.L.

    1983-01-01

    The associated Hamiltonian for the symmetric eight-vertex model is obtained by taking the time-continuous limit in an equivalent Ashkin-Teller model. The result is a Heisenberg Hamiltonian with coefficients J_x, J_y and J_z identical to those found by Sutherland for choices of the parameters a, b, c and d that bring the model close to the transition. The change in the operators is accomplished explicitly, the relation between the crossover operator for the Ashkin-Teller model and the energy operator for the eight-vertex model being obtained in a transparent form. (Author) [pt

  17. Modeling polar cap F-region patches using time varying convection

    International Nuclear Information System (INIS)

    Sojka, J.J.; Bowline, M.D.; Schunk, R.W.; Decker, D.T.; Valladares, C.E.; Sheehan, R.; Anderson, D.N.; Heelis, R.A.

    1993-01-01

    Here the authors present the results of computerized simulations of the polar cap regions which were able to model the formation of polar cap patches. They used the Utah State University Time-Dependent Ionospheric Model (TDIM) and the Phillips Laboratory (PL) F-region models in this work. By allowing a time varying magnetospheric electric field in the models, they were able to generate the patches. This time varying field generates a convection in the ionosphere. This convection is similar to convective changes observed in the ionosphere at times of southward pointing interplanetary magnetic field, due to changes in the B_y component of the IMF.

  18. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results demonstrate the effectiveness of the travel time estimation method.
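The link-dividing idea can be sketched as follows (a simplification of the paper's dynamic dividing algorithm, with invented speed reports): estimate each segment's travel time from the harmonic mean of connected-vehicle spot speeds, then sum over segments, so that a congested half and a free-flowing half are not averaged away.

```python
# Estimate link travel time from connected-vehicle (position, speed)
# reports by dividing the link into segments
def link_travel_time(length, reports, n_segments):
    seg_len = length / n_segments
    total = 0.0
    for seg in range(n_segments):
        speeds = [v for pos, v in reports
                  if seg * seg_len <= pos < (seg + 1) * seg_len]
        if not speeds:  # no probe in this segment: fall back to link mean
            speeds = [sum(v for _, v in reports) / len(reports)]
        # harmonic mean of spot speeds approximates the space-mean speed
        hm = len(speeds) / sum(1.0 / v for v in speeds)
        total += seg_len / hm
    return total

# 1 km link: congested first half (5 m/s), free-flow second half (15 m/s)
reports = [(100, 5.0), (300, 5.0), (600, 15.0), (900, 15.0)]
t = link_travel_time(1000.0, reports, n_segments=2)
print(t)  # 500/5 + 500/15 ≈ 133.3 s
```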

  19. On the choice of the demand and hydraulic modeling approach to WDN real-time simulation

    Science.gov (United States)

    Creaco, Enrico; Pezzinga, Giuseppe; Savic, Dragan

    2017-07-01

    This paper aims to analyze two demand modeling approaches, i.e., top-down deterministic (TDA) and bottom-up stochastic (BUA), with particular reference to their impact on the hydraulic modeling of water distribution networks (WDNs). In the applications, the hydraulic modeling is carried out through the extended period simulation (EPS) and unsteady flow modeling (UFM). Taking as benchmark the modeling conditions that are closest to the WDN's real operation (UFM + BUA), the analysis showed that the traditional use of EPS + TDA produces large pressure head and water discharge errors, which can be attenuated only when large temporal steps (up to 1 h in the case study) are used inside EPS. The use of EPS + BUA always yields better results. Indeed, EPS + BUA already gives a good approximation of the WDN's real operation when intermediate temporal steps (larger than 2 min in the case study) are used for the simulation. The trade-off between consistency of results and computational burden makes EPS + BUA the most suitable tool for real-time WDN simulation, while benefitting from data acquired through smart meters for the parameterization of demand generation models.
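A common way to realize bottom-up stochastic (BUA) demand generation is a Poisson rectangular pulse process per user. The sketch below uses that standard construction with invented rates and durations; the paper's calibrated demand-generation models, and the hydraulic solver they feed, are far more elaborate.

```python
import random

# Bottom-up stochastic demand: each user draws pulse arrivals from a
# Poisson process (rate lam), each pulse lasting dur_s seconds at a
# fixed intensity; nodal demand is the sum over users
def poisson_pulse_demand(n_users, horizon_s, lam, dur_s, intensity, dt, seed=3):
    rng = random.Random(seed)
    steps = int(horizon_s / dt)
    demand = [0.0] * steps
    for _ in range(n_users):
        t = rng.expovariate(lam)
        while t < horizon_s:
            start = int(t / dt)
            end = min(steps, int((t + dur_s) / dt))
            for k in range(start, end):
                demand[k] += intensity
            t += rng.expovariate(lam)   # gap to the next pulse
    return demand

d = poisson_pulse_demand(n_users=50, horizon_s=3600, lam=1 / 600.0,
                         dur_s=60.0, intensity=0.1, dt=10.0)
print(sum(d) / len(d))  # mean ≈ n_users * lam * dur_s * intensity = 0.5
```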

  20. Modeling information diffusion in time-varying community networks

    Science.gov (United States)

    Cui, Xuelian; Zhao, Narisa

    2017-12-01

    Social networks are rarely static, and they typically have time-varying network topologies. A great number of studies have modeled temporal networks and explored social contagion processes within these models; however, few of these studies have considered community structure variations. In this paper, we present a study of how the time-varying property of a modular structure influences the information dissemination. First, we propose a continuous-time Markov model of information diffusion where two parameters, mobility rate and community attractiveness, are introduced to address the time-varying nature of the community structure. The basic reproduction number is derived, and the accuracy of this model is evaluated by comparing the simulation and theoretical results. Furthermore, numerical results illustrate that generally both the mobility rate and community attractiveness significantly promote the information diffusion process, especially in the initial outbreak stage. Moreover, the strength of this promotion effect is much stronger when the modularity is higher. Counterintuitively, it is found that when all communities have the same attractiveness, social mobility no longer accelerates the diffusion process. In addition, we show that the local spreading in the advantage group has been greatly enhanced due to the agglomeration effect caused by the social mobility and community attractiveness difference, which thus increases the global spreading.

  1. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    Science.gov (United States)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  2. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
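The core of continuous-time modeling is the exact discrete-time implication of the continuous model: for a CAR(1) (Ornstein-Uhlenbeck) process with drift coefficient a < 0, the implied discrete autoregression coefficient over an interval dt is e^(a·dt). A minimal simulation sketch follows; the parameters are illustrative, and ctsem of course estimates a from data rather than simulating it.

```python
import math, random

# Exact discrete-time update of a CAR(1)/OU process:
#   x(t+dt) = e^(a*dt) * x(t) + e,
#   e ~ N(0, q/(2|a|) * (1 - e^(2a*dt)))   (stationary variance q/(2|a|))
def simulate_car1(a, q, dt, n, x0=0.0, seed=0):
    rng = random.Random(seed)
    phi = math.exp(a * dt)                      # discrete AR coefficient
    sd = math.sqrt(q / (-2 * a) * (1 - math.exp(2 * a * dt)))
    xs = [x0]
    for _ in range(n):
        xs.append(phi * xs[-1] + rng.gauss(0.0, sd))
    return xs, phi

xs, phi = simulate_car1(a=-0.5, q=1.0, dt=1.0, n=5000)
print(phi)  # e^(-0.5) ≈ 0.607: the implied discrete-time AR coefficient
```

The same exponential map is what lets a single continuous-time model reproduce panel data observed at arbitrary, even unequal, time intervals.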

  3. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
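The difference between a time-domain series system and a traditional chain can be sketched numerically: the z tooth pairs of a gear share the total mesh cycles, so each pair accumulates only n/z load cycles, whereas every link of a chain is loaded on every cycle. The per-cycle survival probability below is invented for illustration; the paper's models use load-strength interference rather than a constant per-cycle reliability.

```python
# Gear set as a time-domain series system: z tooth pairs share the
# total mesh cycles, so each pair sees only n/z load cycles
def pair_reliability(cycles, r_per_cycle=0.999999):
    return r_per_cycle ** cycles

def gear_set_reliability(total_cycles, z):
    per_pair = total_cycles / z
    r = 1.0
    for _ in range(z):          # series system over the z tooth pairs
        r *= pair_reliability(per_pair)
    return r

def naive_chain_reliability(total_cycles, z):
    # traditional series model: every element loaded on every cycle
    return pair_reliability(total_cycles) ** z

r_gear = gear_set_reliability(1e6, z=20)
r_chain = naive_chain_reliability(1e6, z=20)
print(r_gear, r_chain)  # the gear-specific model is far less pessimistic
```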

  4. Modelling endurance and resumption times for repetitive one-hand pushing.

    Science.gov (United States)

    Rose, Linda M; Beauchemin, Catherine A A; Neumann, W Patrick

    2018-07-01

    This study's objective was to develop models of endurance time (ET) as a function of load level (LL), and of resumption time (RT) after loading as a function of both LL and loading time (LT), for repeated loadings. Ten male participants with experience in construction work each performed 15 different one-handed repeated pushing tasks at shoulder height with varied exerted force and duration. These data were used to create regression models predicting ET and RT. It is concluded that power law relationships are most appropriate for modelling ET and RT. While the data the equations are based on are limited regarding the number of participants, gender, postures, and the magnitude and type of exerted force, the paper suggests how this kind of modelling can be used in job design and in further research. Practitioner Summary: Adequate muscular recovery during work shifts is important to create sustainable jobs. This paper describes mathematical modelling and presents models for endurance times and resumption times (an aspect of recovery need), based on data from an empirical study. The models can be used to help manage fatigue levels in job design.
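Models of the fitted form can be sketched as power laws, ET = a·LL^b and RT = c·LL^d·LT^e. The coefficients below are placeholders chosen only to give the right qualitative shape (ET falls steeply with load; RT grows with load and loading time); they are not the regression estimates from the study.

```python
# Power-law endurance and resumption time models (coefficients are
# hypothetical, not the paper's fitted values)
def endurance_time(load_frac, a=10.0, b=-1.5):
    # minutes a push at load_frac of maximum force can be sustained
    return a * load_frac ** b

def resumption_time(load_frac, loading_time, c=2.0, d=1.2, e=0.8):
    # minutes until the next exertion can be resumed
    return c * load_frac ** d * loading_time ** e

et = endurance_time(0.5)          # at 50 % of maximum voluntary force
rt = resumption_time(0.5, 1.0)    # after loading for 1 minute
print(et, rt)
```

With b < 0 and d, e > 0, heavier or longer exertions shorten endurance and lengthen the required recovery, which is the pattern such models encode for job design.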

  5. Modeling of wear behavior of Al/B_4C composites produced by powder metallurgy

    International Nuclear Information System (INIS)

    Sahin, Ismail; Bektas, Asli; Guel, Ferhat; Cinci, Hanifi

    2017-01-01

    Wear characteristics of composites with an Al matrix reinforced with B_4C particle percentages of 5, 10, 15 and 20, produced by the powder metallurgy method, were studied in this work. For this purpose, a mixture of Al and B_4C powders was pressed under 650 MPa pressure and then sintered at 635 °C. Analyses of hardness, density and microstructure were performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads with 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feed-forward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.
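At prediction time a three-input, one-output network of this kind reduces to a simple forward pass. The sketch below uses one hidden layer with arbitrary weights for illustration; the actual model's weights come from Levenberg-Marquardt training, which is omitted here.

```python
import math

# Forward pass of a 3-input, 1-output feed-forward network with one
# tanh hidden layer (weights are arbitrary, not the trained ones)
def forward(x, W1, b1, W2, b2):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

# hypothetical normalized inputs: load, abrasive mesh, B4C fraction
x = [0.5, 0.3, 0.2]
W1 = [[0.4, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [0.7, -0.6]
b2 = 0.05
y = forward(x, W1, b1, W2, b2)   # normalized wear prediction
print(y)
```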

  6. Universe before Planck time: A quantum gravity model

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1983-01-01

    A model for quantum gravity can be constructed by treating the conformal degree of freedom of spacetime as a quantum variable. An isotropic, homogeneous cosmological solution in this quantum gravity model is presented. The spacetime is nonsingular for all the three possible values of three-space curvature, and agrees with the classical solution for time scales larger than the Planck time scale. A possibility of quantum fluctuations creating the matter in the universe is suggested

  7. Validation of a novel cost effective easy to produce and durable in vitro model for kidney-puncture and PNL-Simulation.

    Science.gov (United States)

    Klein, Jan Thorsten; Rassweiler, Jens; Rassweiler-Seyfried, Marie-Claire Charlotte

    2018-03-29

    Nephrolithiasis is one of the most common diseases in urology. According to the EAU Guidelines, percutaneous nephrolitholapaxy (PNL) is recommended when treating a kidney stone >2 cm. Nowadays PNL is performed even for smaller stones. The most crucial step of PNL is the puncture of the planned site, and PNL-novice surgeons need to practice this step in a safe environment with an ideal training model. We developed and evaluated a new, easy to produce, in-vitro model for the training of freehand puncture of the kidney. Porcine kidneys with ureters were embedded in ballistic gel. Food coloring and a preservative agent were added. We used the standard imaging modalities of X-ray and ultrasound to validate the training model. An additional new technique, the iPAD guided puncture, was evaluated. Five novices and three experts conducted 12 punctures for each imaging technique. Puncture time, radiation dose, and number of attempts until a successful puncture were measured. Mann-Whitney-U, Kruskal-Wallis, and U-Tests were used for statistical analyses. The sonographic guided puncture is slightly but not significantly faster than the fluoroscopic guided puncture and the iPAD assisted puncture. Similarly, the most experienced surgeon's time to a successful puncture was slightly less than that of the residents, and the experienced surgeons needed the fewest attempts to perform a successful puncture. In terms of radiation exposure, the residents had a significant reduction of radiation exposure compared to the experienced surgeons. The newly developed ballistic gel kidney-puncture model is a good training tool for a variety of kidney puncture techniques, with good content, construct, and face validity.

  8. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; Heijden, van der M.C.

    1996-01-01

    In many multi-echelon inventory systems the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process which generates such lead times is usually not

  9. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; van der Heijden, M.C.

    1997-01-01

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well

  10. Neutrino flavor instabilities in a time-dependent supernova model

    Energy Technology Data Exchange (ETDEWEB)

    Abbar, Sajad; Duan, Huaiyu, E-mail: duan@unm.edu

    2015-12-17

    A dense neutrino medium such as that inside a core-collapse supernova can experience collective flavor conversion or oscillations because of the neutral-current weak interaction among the neutrinos. This phenomenon has been studied in a restricted, stationary supernova model which possesses the (spatial) spherical symmetry about the center of the supernova and the (directional) axial symmetry around the radial direction. Recently it has been shown that these spatial and directional symmetries can be broken spontaneously by collective neutrino oscillations. In this letter we analyze the neutrino flavor instabilities in a time-dependent supernova model. Our results show that collective neutrino oscillations start at approximately the same radius in both the stationary and time-dependent supernova models unless there exist very rapid variations in local physical conditions on timescales of a few microseconds or shorter. Our results also suggest that collective neutrino oscillations can vary rapidly with time in the regimes where they do occur, and therefore need to be studied in time-dependent supernova models.

  11. Details Matter: Noise and Model Structure Set the Relationship between Cell Size and Cell Cycle Timing

    Directory of Open Access Journals (Sweden)

    Felix Barber

    2017-11-01

    Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted “molecular” models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria such as Escherichia coli, which grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with the average and standard deviation of cell size increasing over time and saturating at values up to 100-fold higher than those for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both the inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This “adder” behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C + D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C + D period, as reported previously (Ho and Amir, 2015). In bacteria, division into two equally sized cells does not broaden the size distribution.
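
    The "adder" behaviour discussed above is easy to simulate. The sketch below (standard-library Python, an illustration rather than the paper's model code) follows a single lineage under a pure adder rule with symmetric division; the initial volume, added volume, and noise level are arbitrary assumptions.

```python
import random

def simulate_adder(v0, delta, noise_sd, n_generations, seed=0):
    """Follow one lineage under the 'adder' rule: each cell adds a
    volume delta (plus Gaussian noise) between birth and division,
    then divides symmetrically into two equal halves."""
    rng = random.Random(seed)
    v_birth = v0
    sizes = [v_birth]
    for _ in range(n_generations):
        added = max(delta + rng.gauss(0.0, noise_sd), 0.0)
        v_birth = (v_birth + added) / 2.0  # symmetric division
        sizes.append(v_birth)
    return sizes

sizes = simulate_adder(v0=4.0, delta=1.0, noise_sd=0.05, n_generations=200)
# Deviations from the fixed point are halved every generation, so the
# birth size relaxes toward delta (here 1.0) regardless of the start.
print(round(sizes[0], 2), round(sizes[-1], 2))
```

    The halving of size deviations at each division is why the adder rule yields size homeostasis without any explicit size sensor.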

  12. Producing design objects from regular polyhedra: A Pratical approach

    OpenAIRE

    Polimeni, Beniamino

    2017-01-01

    In the last few years, digital modeling techniques have played a major role in Architecture and design, influencing, at the same time, the creative process and the way the objects are fabricated. This revolution has produced a new fertile generation of architects and designers focused on the expanding possibilities of material and formal production reinforcing the idea of architecture as an interaction between art and artisanship. This innovative perspective inspires this paper, w...

  13. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred on the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and by applications to a cart-damper-spring system and a one-link robot manipulator.

  14. Worldwide dispersion and deposition of radionuclides produced in atmospheric tests.

    Science.gov (United States)

    Bennett, Burton G

    2002-05-01

    Radionuclides produced in atmospheric nuclear tests were widely dispersed in the global environment. From the many measurements of the concentrations in air and the deposition amounts, much was learned of atmospheric circulation and environmental processes. Based on these results and the reported fission and total yields of individual tests, it has been possible to devise an empirical model of the movement and residence times of particles in the various atmospheric regions. This model, applied to all atmospheric weapons tests, allows extensive calculations of air concentrations and deposition amounts for the entire range of radionuclides produced throughout the testing period. Especially for the shorter-lived fission radionuclides, for which measurement results at the time of the tests are less extensive, a more complete picture of levels and isotope ratios can be obtained, forming a basis for improved dose estimations. The contributions to worldwide fallout can be inferred from individual tests, from tests at specific sites, or by specific countries. Progress was also made in understanding the global hydrological and carbon cycles from the tritium and 14C measurements. A review of the global measurements and modeling results is presented in this paper. In the future, if injections of materials into the atmosphere occur, their anticipated motions and fates can be predicted from the knowledge gained from the fallout experience.

  15. Modeling Space-Time Dependent Helium Bubble Evolution in Tungsten Armor under IFE Conditions

    International Nuclear Information System (INIS)

    Qiyang Hu; Shahram Sharafat; Nasr Ghoniem

    2006-01-01

    The High Average Power Laser (HAPL) program is a coordinated effort to develop Laser Inertial Fusion Energy. The implosion of the D-T target produces a spectrum of neutrons, X-rays, and charged particles, which arrive at the first wall (FW) at different times within about 2.5 μs at a frequency of 5 to 10 Hz. Helium is one of several high-energy charged particle constituents impinging on the candidate tungsten armored low activation ferritic steel First Wall. The spread of the implanted debris and burn helium energies results in a unique space-time dependent implantation profile that spans about 10 μm in tungsten. Co-implantation of X-rays and other ions results in spatially dependent damage profiles and rapid space-time dependent temperature spikes and gradients. The rate of helium transport and helium bubble formation will vary significantly throughout the implanted region. Furthermore, helium will also be transported via the migration of helium bubbles and non-equilibrium helium-vacancy clusters. The HEROS code was developed at UCLA to model the spatial and time-dependent helium bubble nucleation, growth, coalescence, and migration under transient damage rates and transient temperature gradients. The HEROS code is based on kinetic rate theory, which includes clustering of helium and vacancies, helium mobility, helium-vacancy cluster stability, cavity nucleation and growth and other microstructural features such as interstitial loop evolution, grain boundaries, and precipitates. The HEROS code is based on space-time discretization of reaction-diffusion type equations to account for migration of mobile species between neighboring bins as single atoms, clusters, or bubbles. HAPL chamber FW implantation conditions are used to model helium bubble evolution in the implanted tungsten. Helium recycling rate predictions are compared with experimental results of helium ion implantation experiments. (author)

  16. Producers give prices a boost

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Uranium producers came alive in August, helping spot prices crack the $8.00 barrier for the first time since March. The upper end of NUKEM's price range actually finished the month at $8.20. Scrambling to fulfill their long-term delivery contracts, producers dominated the market. In the span of three weeks, five producers came out for 2 million lbs U3O8, ultimately buying nearly 1.5 million lbs. One producer accounted for over half this volume. The major factor behind rising prices was that producers required specific origins to meet contract obligations. Buyers willing to accept open origins created the lower end of NUKEM's price range.

  17. Nowcasting, forecasting and hindcasting Harvey and Irma inundation in near-real time using a continental 2D hydrodynamic model

    Science.gov (United States)

    Sampson, C. C.; Wing, O.; Quinn, N.; Smith, A.; Neal, J. C.; Schumann, G.; Bates, P.

    2017-12-01

    During an ongoing natural disaster, data are required on: (1) the current situation (nowcast); (2) its likely immediate evolution (forecast); and (3) a consistent post-event view of what actually happened (hindcast or reanalysis). We describe methods used to achieve all three tasks for flood inundation during the Harvey and Irma events using a continental-scale 2D hydrodynamic model (Wing et al., 2017). The model solves the local inertial form of the Shallow Water equations over a regular grid of 1 arcsecond (~30 m). Terrain data are taken from the USGS National Elevation Dataset, with known flood defences represented using the U.S. Army Corps of Engineers National Levee Dataset. Channels are treated as sub-grid-scale features using the HydroSHEDS global hydrography data set. The model is driven using river flows, rainfall and coastal water levels. It simulates river flooding in basins > 50 km2, and pluvial and coastal flooding everywhere. Previous wide-area validation tests show this model to be capable of matching FEMA maps and USGS local models built with bespoke data, with hit rates of 86% and 92% respectively (Wing et al., 2017). Boundary conditions were taken from NOAA QPS data to produce nowcast and forecast simulations in near real time, before updating with NOAA observations to produce the hindcast. During the event, simulation results were supplied to major insurers and multi-nationals, who used them to estimate their likely capital exposure and to mitigate flood damage to their infrastructure whilst the event was underway. Simulations were validated against modelled flood footprints computed by FEMA and USACE, and composite satellite imagery produced by the Dartmouth Flood Observatory. For the Harvey event, hit rates ranged from 60% to 84% against these data sources, but a lack of metadata meant it was difficult to perform like-for-like comparisons. The satellite data also appeared to miss known flooding in urban areas that was picked up in the models. Despite
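
    The hit rates quoted above have a simple definition: the fraction of observed wet cells that the model also marks wet. A minimal sketch with made-up 1-D "maps" (not the study's validation code):

```python
def hit_rate(observed, modelled):
    """Fraction of observed wet cells (1 = wet, 0 = dry) that the
    model also predicts wet. Note this score ignores false alarms;
    scores such as the critical success index penalise those too."""
    hits = sum(1 for o, m in zip(observed, modelled) if o and m)
    return hits / sum(observed)

observed = [1, 1, 1, 0, 0, 1, 1, 0]  # illustrative values only
modelled = [1, 1, 0, 0, 1, 1, 1, 0]
print(hit_rate(observed, modelled))  # 4 of the 5 observed wet cells hit
```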

  18. Modelling and finite-time stability analysis of psoriasis pathogenesis

    Science.gov (United States)

    Oza, Harshal B.; Pandey, Rakesh; Roper, Daniel; Al-Nuaimi, Yusur; Spurgeon, Sarah K.; Goodfellow, Marc

    2017-08-01

    A new systems model of psoriasis is presented and analysed from the perspective of control theory. Cytokines are treated as actuators to the plant model that govern the cell population under the reasonable assumption that cytokine dynamics are faster than the cell population dynamics. The analysis of various equilibria is undertaken based on singular perturbation theory. Finite-time stability and stabilisation have been studied in various engineering applications where the principal paradigm uses non-Lipschitz functions of the states. A comprehensive study of the finite-time stability properties of the proposed psoriasis dynamics is carried out. It is demonstrated that the dynamics are finite-time convergent to certain equilibrium points rather than asymptotically or exponentially convergent. This feature of finite-time convergence motivates the development of a modified version of the Michaelis-Menten function, frequently used in biology. This framework is used to model cytokines as fast finite-time actuators.
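
    Finite-time convergence, as opposed to asymptotic or exponential convergence, is the key property here, and it hinges on a non-Lipschitz right-hand side. A generic scalar illustration (not the psoriasis dynamics themselves) is dx/dt = -k*sign(x)*|x|^a with 0 < a < 1, whose trajectories reach zero exactly at T = |x(0)|^(1-a) / (k(1-a)):

```python
import math

def settle_time(x0, k=1.0, alpha=0.5, dt=1e-4, tol=1e-6, t_max=50.0):
    """Forward-Euler integration of dx/dt = -k*sign(x)*|x|**alpha
    (0 < alpha < 1); returns the time at which |x| first drops
    below tol."""
    x, t = x0, 0.0
    while abs(x) > tol and t < t_max:
        step = -k * math.copysign(abs(x) ** alpha, x) * dt
        if abs(step) > abs(x):  # avoid Euler overshoot through zero
            step = -x
        x += step
        t += dt
    return t

# Closed form: T = |x0|**(1 - alpha) / (k * (1 - alpha)),
# i.e. T = 2.0 for x0 = 1, k = 1, alpha = 0.5.
print(round(settle_time(1.0), 2))
```

    An exponentially convergent system (a = 1) only approaches zero asymptotically; here the state is exactly zero after a finite time, which is what motivates the modified Michaelis-Menten actuator model.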

  19. Evaluation of an Improved U.S. Food and Drug Administration Method for the Detection of Cyclospora cayetanensis in Produce Using Real-Time PCR.

    Science.gov (United States)

    Murphy, Helen R; Lee, Seulgi; da Silva, Alexandre J

    2017-07-01

    Cyclospora cayetanensis is a protozoan parasite that causes human diarrheal disease associated with the consumption of fresh produce or water contaminated with C. cayetanensis oocysts. In the United States, foodborne outbreaks of cyclosporiasis have been linked to various types of imported fresh produce, including cilantro and raspberries. An improved method was developed at the U.S. Food and Drug Administration for identification of C. cayetanensis in produce. The method relies on a 0.1% Alconox produce wash solution for efficient recovery of oocysts, a commercial kit for DNA template preparation, and an optimized TaqMan real-time PCR assay with an internal amplification control for molecular detection of the parasite. A single-laboratory validation study was performed to assess the method's performance and compare the optimized TaqMan real-time PCR assay and a reference nested PCR assay by examining 128 samples. The samples consisted of 25 g of cilantro or 50 g of raspberries seeded with 0, 5, 10, or 200 C. cayetanensis oocysts. Detection rates for cilantro seeded with 5 and 10 oocysts were 50.0 and 87.5%, respectively, with the real-time PCR assay and 43.7 and 94.8%, respectively, with the nested PCR assay. Detection rates for raspberries seeded with 5 and 10 oocysts were 25.0 and 75.0%, respectively, with the real-time PCR assay and 18.8 and 68.8%, respectively, with the nested PCR assay. All unseeded samples were negative, and all samples seeded with 200 oocysts were positive. Detection rates using the two PCR methods were statistically similar, but the real-time PCR assay is less laborious and less prone to amplicon contamination and allows monitoring of amplification and analysis of results, making it more attractive to diagnostic testing laboratories. The improved sample preparation steps and the TaqMan real-time PCR assay provide a robust, streamlined, and rapid analytical procedure for surveillance, outbreak response, and regulatory testing of foods for

  20. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was just developed to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  1. Interevent times in a new alarm-based earthquake forecasting model

    Science.gov (United States)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of the proposed MR model, a composite Japan-wide earthquake catalogue for the years 679 to 2012 was compiled from the Japan Meteorological Agency catalogue for the period 1923-2012 and the Utsu historical seismicity records for 679-1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent when the MR method was used instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
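
    Since the abstract defines the MR as the inverse of the index of dispersion (variance over mean) of interevent times, it can be computed directly from a sequence of event times. A sketch with synthetic events (illustrative only; it does not reproduce the ERS sampling procedure):

```python
import statistics

def moment_ratio(event_times):
    """MR of interevent times: mean / variance, i.e. the reciprocal
    of the index of dispersion (Fano factor)."""
    gaps = [t1 - t0 for t0, t1 in zip(event_times, event_times[1:])]
    return statistics.mean(gaps) / statistics.pvariance(gaps)

# Quasi-periodic events -> small gap variance -> large MR;
# clustered events -> large gap variance -> small MR.
regular = [i + 0.01 * (i % 2) for i in range(50)]
clustered = [0, 0.1, 0.2, 10, 10.1, 10.2, 20, 20.1, 20.2, 30]
print(moment_ratio(regular) > moment_ratio(clustered))  # True
```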

  2. Evaluation of Time-Temperature Integrators (TTIs) with Microorganism-Entrapped Microbeads Produced Using Homogenization and SPG Membrane Emulsification Techniques.

    Science.gov (United States)

    Rahman, A T M Mijanur; Lee, Seung Ju; Jung, Seung Won

    2015-12-28

    A comparative study was conducted to evaluate precision and accuracy in controlling the temperature dependence of encapsulated microbial time-temperature integrators (TTIs) developed using two different emulsification techniques. Weissella cibaria CIFP 009 cells, immobilized within 2% Na-alginate gel microbeads using homogenization (5,000, 7,000, and 10,000 rpm) and Shirasu porous glass (SPG) membrane (10 μm) technologies, were applied to microbial TTIs. The prepared microbeads were characterized with respect to their size, size distribution, shape and morphology, entrapment efficiency, and bead production yield. Additionally, fermentation process parameters including growth rate were investigated. The TTI responses (changes in pH and titratable acidity (TA)) were evaluated as a function of temperature (20°C, 25°C, and 30°C). In comparison with conventional methods, SPG membrane technology not only produced highly uniform, small beads with the narrowest size distribution, but also gave a bead production yield nearly 3.0 to 4.5 times higher. However, among the TTIs produced using the homogenization technique, poor linearity (R^2) in terms of TA was observed for the 5,000 and 7,000 rpm treatments. Consequently, microbeads produced by the SPG membrane and by homogenization at 10,000 rpm were selected for adjusting the temperature dependence. The Ea values of TTIs containing 0.5, 1.0, and 1.5 g microbeads, prepared by the SPG membrane and conventional methods, were estimated to be 86.0, 83.5, and 76.6 kJ/mol, and 85.5, 73.5, and 62.2 kJ/mol, respectively. Therefore, microbial TTIs developed using SPG membrane technology are much more efficient in controlling temperature dependence.
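
    The Ea values above are Arrhenius activation energies, obtainable from TTI response rates at the three test temperatures via the slope of ln(rate) against 1/T. A sketch with synthetic rates generated from a known Ea (the real rates would come from the measured pH/TA kinetics):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_c, rates):
    """Least-squares Arrhenius fit: the slope of ln(rate) versus 1/T
    equals -Ea/R, so Ea = -slope * R (returned in J/mol)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope * R

Ea_true = 86000.0  # J/mol, matching the first SPG value in the abstract
temps = [20.0, 25.0, 30.0]  # the abstract's test temperatures, in C
rates = [math.exp(-Ea_true / (R * (t + 273.15))) for t in temps]
print(round(activation_energy(temps, rates) / 1000.0, 1))  # 86.0 kJ/mol
```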

  3. A model for quantification of temperature profiles via germination times

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated.

  4. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  5. Modelling nematode movement using time-fractional dynamics.

    Science.gov (United States)

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
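
    The basic correlated-random-walk ingredient is easy to sketch; the study goes further by using empirically measured turning-angle and speed distributions (with temporal correlations) and a fractional sub-diffusion description, but the toy below with Gaussian turning angles already shows how turning-angle variance controls the character of the trails:

```python
import math
import random

def correlated_random_walk(n_steps, turn_sd, speed_mean, speed_sd, seed=1):
    """2-D correlated random walk: each step turns by a Gaussian angle
    relative to the previous heading and moves at a Gaussian speed
    (hypothetical distributions, not the measured nematode ones)."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)
        speed = max(rng.gauss(speed_mean, speed_sd), 0.0)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        path.append((x, y))
    return path

def net_displacement(path):
    (x0, y0), (x1, y1) = path[0], path[-1]
    return math.hypot(x1 - x0, y1 - y0)

# Small turning variance -> persistent, nearly ballistic trails;
# large turning variance -> diffusive, tangled trails.
persistent = correlated_random_walk(500, 0.05, 1.0, 0.2)
diffusive = correlated_random_walk(500, 2.0, 1.0, 0.2)
print(net_displacement(persistent) > net_displacement(diffusive))
```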

  6. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...

  7. Time inconsistency and reputation in monetary policy: a strategic model in continuous time

    OpenAIRE

    Li, Jingyuan; Tian, Guoqiang

    2005-01-01

    This article develops a model to examine the equilibrium behavior of the time inconsistency problem in a continuous time economy with stochastic and endogenized distortion. First, the authors introduce the notion of sequentially rational equilibrium, and show that the time inconsistency problem may be solved with trigger reputation strategies in a stochastic setting. The conditions for the existence of sequentially rational equilibrium are provided. Then, the concept of sequen...

  8. Enhanced intracellular delivery of a model drug using microbubbles produced by a microfluidic device.

    Science.gov (United States)

    Dixon, Adam J; Dhanaliwala, Ali H; Chen, Johnny L; Hossack, John A

    2013-07-01

    Focal drug delivery to a vessel wall facilitated by intravascular ultrasound and microbubbles holds promise as a potential therapy for atherosclerosis. Conventional methods of microbubble administration result in rapid clearance from the bloodstream and significant drug loss. To address these limitations, we evaluated whether drug delivery could be achieved with transiently stable microbubbles produced in real time and in close proximity to the therapeutic site. Rat aortic smooth muscle cells were placed in a flow chamber designed to simulate physiological flow conditions. A flow-focusing microfluidic device produced 8 μm diameter monodisperse microbubbles within the flow chamber, and ultrasound was applied to enhance uptake of a surrogate drug (calcein). Acoustic pressures up to 300 kPa and flow rates up to 18 mL/s were investigated. Microbubbles generated by the flow-focusing microfluidic device were stabilized with a polyethylene glycol-40 stearate shell and had either a perfluorobutane (PFB) or nitrogen gas core. The gas core composition affected stability, with PFB and nitrogen microbubbles exhibiting half-lives of 40.7 and 18.2 s, respectively. Calcein uptake was observed at lower acoustic pressures with nitrogen microbubbles (100 kPa) than with PFB microbubbles (200 kPa) (p 3). In addition, delivery was observed at all flow rates, with maximal delivery (>70% of cells) occurring at a flow rate of 9 mL/s. These results demonstrate the potential of transiently stable microbubbles produced in real time and in close proximity to the intended therapeutic site for enhancing localized drug delivery. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
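
    Assuming first-order (exponential) decay of the bubble population, the reported half-lives translate directly into the fraction of microbubbles surviving a given transit time to the therapeutic site; only the half-lives themselves come from the abstract.

```python
def fraction_remaining(t_seconds, half_life_seconds):
    """Exponential decay: fraction of bubbles surviving after t seconds
    (first-order decay is an assumption of this sketch)."""
    return 0.5 ** (t_seconds / half_life_seconds)

# Half-lives reported above: PFB 40.7 s, nitrogen 18.2 s.
for gas, t_half in [("PFB", 40.7), ("nitrogen", 18.2)]:
    print(gas, round(fraction_remaining(10.0, t_half), 2))
```

    This is why producing the bubbles in close proximity to the therapeutic site matters: short-lived nitrogen bubbles lose a substantially larger fraction of their population over the same transit time.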

  9. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    Science.gov (United States)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
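
    The paper fits a proper double-seasonal time series model; as a much cruder stand-in that still exploits the same daily and weekly autocorrelations, a double-seasonal naive forecaster averages the values one day and one week earlier (the lags and equal weights are assumptions for illustration):

```python
import math

def double_seasonal_forecast(series, horizon, daily=24, weekly=168):
    """Naive double-seasonal forecaster for hourly demand: each forecast
    is the average of the values one daily and one weekly period back,
    with forecasts fed back in for multi-step horizons."""
    history = list(series)
    out = []
    for _ in range(horizon):
        t = len(history)
        f = 0.5 * history[t - daily] + 0.5 * history[t - weekly]
        out.append(f)
        history.append(f)
    return out

# Two weeks of synthetic hourly demand: a sinusoidal daily cycle with a
# 20% weekend reduction (days 5 and 6 of each week).
demand = []
for t in range(2 * 168):
    weekend = (t // 24) % 7 >= 5
    base = 100 + 30 * math.sin(2 * math.pi * (t % 24) / 24)
    demand.append(base * (0.8 if weekend else 1.0))

fc = double_seasonal_forecast(demand, horizon=24)
print(round(fc[0], 1))  # blends the weekend daily lag with the weekday weekly lag
```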

  10. Reservoir theory, groundwater transit time distributions, and lumped parameter models

    International Nuclear Information System (INIS)

    Etcheverry, D.; Perrochet, P.

    1999-01-01

    The relation between groundwater residence times and transit times is given by the reservoir theory. It allows theoretical transit time distributions to be calculated deterministically, either analytically or with numerical models. Two analytical solutions validate the piston-flow and exponential models for simple conceptual flow systems. A numerical solution of a hypothetical regional groundwater flow shows that lumped parameter models can be applied in some cases to large-scale, heterogeneous aquifers. (author)
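
    For the two lumped models named above: piston flow concentrates the transit time density at t = T, while the exponential model has density g(t) = exp(-t/T)/T, whose mean transit time equals the turnover time T. A quick numerical check of that first moment (illustrative, not the paper's computation):

```python
import math

def exponential_ttd(t, T):
    """Exponential-model transit time density, g(t) = exp(-t/T) / T."""
    return math.exp(-t / T) / T

def mean_transit_time(g, T, dt=0.05, t_max=150.0):
    """First moment of a transit time density by the trapezoid rule."""
    total = 0.0
    n = int(t_max / dt)
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        total += 0.5 * (t0 * g(t0, T) + t1 * g(t1, T)) * dt
    return total

# With a turnover time of T = 10 years, the mean transit time is 10 years.
print(round(mean_transit_time(exponential_ttd, 10.0), 2))  # 10.0
```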

  11. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured movement times and endpoint variabilities were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
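
    Hoffmann's ballistic movement time model is commonly written as MT = a + b*sqrt(A), with A the movement amplitude; the square-root form is taken here as an assumption based on the cited work. A sketch fitting that form to noise-free synthetic data:

```python
import math

def fit_sqrt_model(amplitudes, times):
    """Least-squares fit of MT = a + b * sqrt(A): ordinary linear
    regression of MT on sqrt(A); returns (a, b)."""
    xs = [math.sqrt(a) for a in amplitudes]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(times) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, times)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

A = [25, 50, 100, 200, 400]                  # amplitudes (mm), hypothetical
MT = [0.1 + 0.02 * math.sqrt(v) for v in A]  # synthetic, noise-free times (s)
a, b = fit_sqrt_model(A, MT)
print(round(a, 3), round(b, 3))  # recovers 0.1 and 0.02
```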

  12. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  13. Time representation in reinforcement learning models of the basal ganglia

    Directory of Open Access Journals (Sweden)

    Samuel Joseph Gershman

    2014-01-01

    Full Text Available Reinforcement learning models have been influential in understanding many aspects of basal ganglia function, from reward prediction to action selection. Time plays an important role in these models, but there is still no theoretical consensus about what kind of time representation is used by the basal ganglia. We review several theoretical accounts and their supporting evidence. We then discuss the relationship between reinforcement learning models and the timing mechanisms that have been attributed to the basal ganglia. We hypothesize that a single computational system may underlie both reinforcement learning and interval timing—the perception of duration in the range of seconds to hours. This hypothesis, which extends earlier models by incorporating a time-sensitive action selection mechanism, may have important implications for understanding disorders like Parkinson's disease in which both decision making and timing are impaired.

  14. Time-varying parameter models for catchments with land use change: the importance of model structure

    Science.gov (United States)

    Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid

    2018-05-01

    Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  15. Time-varying parameter models for catchments with land use change: the importance of model structure

    Directory of Open Access Journals (Sweden)

    S. Pathiraja

    2018-05-01

    Full Text Available Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  16. Motivation and timing: clues for modeling the reward system.

    Science.gov (United States)

    Galtress, Tiffany; Marshall, Andrew T; Kirkpatrick, Kimberly

    2012-05-01

    There is growing evidence that a change in reward magnitude or value alters interval timing, indicating that motivation and timing are not independent processes as was previously believed. The present paper reviews several recent studies, as well as presenting some new evidence with further manipulations of reward value during training vs. testing on a peak procedure. The combined results cannot be accounted for by any of the current psychological timing theories. However, in examining the neural circuitry of the reward system, it is not surprising that motivation has an impact on timing because the motivation/valuation system directly interfaces with the timing system. A new approach is proposed for the development of the next generation of timing models, which utilizes knowledge of the neuroanatomy and neurophysiology of the reward system to guide the development of a neurocomputational model of the reward system. The initial foundation along with heuristics for proceeding with developing such a model is unveiled in an attempt to stimulate new theoretical approaches in the field. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Motivation and timing: Clues for modeling the reward system

    Science.gov (United States)

    Galtress, Tiffany; Marshall, Andrew T.; Kirkpatrick, Kimberly

    2012-01-01

    There is growing evidence that a change in reward magnitude or value alters interval timing, indicating that motivation and timing are not independent processes as was previously believed. The present paper reviews several recent studies, as well as presenting some new evidence with further manipulations of reward value during training vs. testing on a peak procedure. The combined results cannot be accounted for by any of the current psychological timing theories. However, in examining the neural circuitry of the reward system, it is not surprising that motivation has an impact on timing because the motivation/valuation system directly interfaces with the timing system. A new approach is proposed for the development of the next generation of timing models, which utilizes knowledge of the neuroanatomy and neurophysiology of the reward system to guide the development of a neurocomputational model of the reward system. The initial foundation along with heuristics for proceeding with developing such a model is unveiled in an attempt to stimulate new theoretical approaches in the field. PMID:22421220

  18. Real-time model for simulating a tracked vehicle on deformable soils

    Directory of Open Access Journals (Sweden)

    Martin Meywerk

    2016-05-01

    Full Text Available Simulation is one possibility to gain insight into the behaviour of tracked vehicles on deformable soils. A lot of publications are known on this topic, but most of the simulations described there cannot be run in real-time. The ability to run a simulation in real-time is necessary for driving simulators. This article describes an approach for real-time simulation of a tracked vehicle on deformable soils. The components of the real-time model are as follows: a conventional wheeled vehicle simulated in the Multi Body System software TRUCKSim, a geometric description of landscape, a track model and an interaction model between track and deformable soils based on Bekker theory and Janosi–Hanamoto, on one hand, and between track and vehicle wheels, on the other hand. Landscape, track model, soil model and the interaction are implemented in MATLAB/Simulink. The details of the real-time model are described in this article, and a detailed description of the Multi Body System part is omitted. Simulations with the real-time model are compared to measurements and to a detailed Multi Body System–finite element method model of a tracked vehicle. An application of the real-time model in a driving simulator is presented, in which 13 drivers assess the comfort of a passive and an active suspension of a tracked vehicle.

  19. Stochastic modeling of hourly rainfall time series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    The occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead times to be gained. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, ARMAX (e.g. Salas [1992]). Such models give the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil
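The alternating renewal process at the core of the DRIP approach can be sketched as a generator that switches between dry spells and storms of random duration. All distributions, parameter values, and the function name below are illustrative stand-ins, not the calibrated Campania model.

```python
import random

# Hedged sketch of an alternating-renewal rainfall generator in the spirit of
# the DRIP model: the series alternates between dry spells and rain storms,
# each with a random (here exponential) duration, and each storm gets a random
# mean intensity. Parameters are illustrative, not calibrated values.
def generate_hourly_rainfall(n_hours, mean_dry=30.0, mean_wet=6.0,
                             mean_intensity=2.0, seed=42):
    rng = random.Random(seed)
    series = []
    raining = False                      # start in a dry spell
    while len(series) < n_hours:
        mean_len = mean_wet if raining else mean_dry
        duration = max(1, round(rng.expovariate(1.0 / mean_len)))
        depth = rng.expovariate(1.0 / mean_intensity) if raining else 0.0
        series.extend([depth] * duration)   # constant intensity within a pulse
        raining = not raining
    return series[:n_hours]

rain = generate_hourly_rainfall(24 * 365)
print(sum(1 for h in rain if h == 0.0) / len(rain))  # fraction of dry hours
```

The intermittency that defeats AR-type models appears here directly: most hours are exactly zero, with rainfall concentrated in rectangular pulses.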

  20. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15 min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics at different time scales, three time series were developed, namely weekly, daily, and 15 min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.
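The IMM idea of combining several forecasters by likelihood-weighted mixing can be sketched as follows. The three naive forecasters, the Gaussian error model, and all parameters are illustrative stand-ins for the paper's time series models, not its actual formulation.

```python
import math

# Hedged sketch of the IMM mixing idea: several forecasters run in parallel,
# each model's probability is re-weighted by the Gaussian likelihood of its
# latest one-step error, and the published forecast is the probability-weighted
# mix. The forecasters and sigma below are invented for illustration.
def imm_forecast(history, sigma=5.0):
    models = {
        "last_value": lambda h: h[-1],
        "weekly": lambda h: h[-7] if len(h) >= 7 else h[-1],
        "mean3": lambda h: sum(h[-3:]) / len(h[-3:]),
    }
    probs = {name: 1.0 / len(models) for name in models}
    for t in range(7, len(history)):
        for name, f in models.items():
            err = history[t] - f(history[:t])          # one-step-ahead error
            probs[name] *= math.exp(-0.5 * (err / sigma) ** 2)
        total = sum(probs.values()) or 1.0
        probs = {n: max(p / total, 1e-6) for n, p in probs.items()}
    return sum(probs[n] * f(history) for n, f in models.items()), probs

demand = [120, 130, 125, 140, 150, 90, 80] * 8   # synthetic weekly pattern
forecast, probs = imm_forecast(demand)
print(forecast, max(probs, key=probs.get))
```

On a strictly periodic series the weekly forecaster accumulates all the probability mass, so the mixed forecast converges to the value seen one period earlier.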

  1. Three-factor models versus time series models: quantifying time-dependencies of interactions between stimuli in cell biology and psychobiology for short longitudinal data.

    Science.gov (United States)

    Frank, Till D; Kiyatkin, Anatoly; Cheong, Alex; Kholodenko, Boris N

    2017-06-01

    Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses on the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. In order to illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and dramatically decays at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-h time window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor / glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered in a critical time window. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  2. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    Science.gov (United States)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  3. Footprint (A Screening Model for Estimating the Area of a Plume Produced from Gasoline Containing Ethanol)

    Science.gov (United States)

    FOOTPRINT is a simple and user-friendly screening model to estimate the length and surface area of BTEX plumes in ground water produced from a spill of gasoline that contains ethanol. Ethanol has a potential negative impact on the natural biodegradation of BTEX compounds in groun...

  4. Modelling Time-Varying Volatility in Financial Returns

    DEFF Research Database (Denmark)

    Amado, Cristina; Laakkonen, Helinä

    2014-01-01

    The “unusually uncertain” phase in the global financial markets has inspired many researchers to study the effects of ambiguity (or “Knightian uncertainty”) on the decisions made by investors and their implications for the capital markets. We contribute to this literature by using a modified version of the time-varying GARCH model of Amado and Teräsvirta (2013) to analyze whether the increasing uncertainty has caused excess volatility in the US and European government bond markets. In our model, volatility is multiplicatively decomposed into two time-varying conditional components: the first being captured by a stable GARCH(1,1) process and the second driven by the level of uncertainty in the financial market.
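The multiplicative decomposition described above can be sketched as a simulator in which the conditional variance is the product of a stable GARCH(1,1) component and a slowly varying deterministic component standing in for the uncertainty level. All parameter values and the logistic shape of the slow component are illustrative assumptions, not the paper's estimates.

```python
import math
import random

# Hedged sketch in the spirit of Amado and Teräsvirta (2013): conditional
# variance = h_t (stable GARCH(1,1) component) * g_t (slowly varying logistic
# function of time, a stand-in for market uncertainty). Illustrative values.
def simulate_returns(n=1000, omega=0.05, alpha=0.05, beta=0.9, seed=3):
    rng = random.Random(seed)
    h = omega / (1 - alpha - beta)       # start at the unconditional variance
    returns = []
    for t in range(n):
        g = 1.0 + 8.0 / (1.0 + math.exp(-10.0 * (t / n - 0.5)))  # rises mid-sample
        eps = rng.gauss(0.0, 1.0)
        r = math.sqrt(h * g) * eps
        returns.append(r)
        h = omega + alpha * (r * r) / g + beta * h  # GARCH(1,1) on rescaled shocks
    return returns

rets = simulate_returns()
var1 = sum(r * r for r in rets[:500]) / 500
var2 = sum(r * r for r in rets[500:]) / 500
print(var1, var2)  # second half is more volatile as g_t rises
```

Dividing the squared return by g_t before feeding it back into the GARCH recursion is what keeps the h_t component stable while g_t absorbs the slow change in the volatility level.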

  5. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
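Steps 4 through 6 of the method above (defining a time window, then computing time series features as latent variables) can be sketched as a sliding-window feature extractor. The window length and the heart-rate series are invented illustrations, not data or parameters from the study.

```python
# Hedged sketch of the windowed-feature step: derive mean, variance, and a
# crude linear trend as latent variables from a raw vital-sign series.
# The series and window length below are illustrative, not study data.
def window_features(series, window):
    feats = []
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        slope = (w[-1] - w[0]) / (window - 1)   # crude linear trend
        feats.append({"mean": mean, "var": var, "slope": slope})
    return feats

heart_rate = [110, 112, 111, 115, 120, 126, 133, 141]  # deteriorating trend
feats = window_features(heart_rate, window=4)
print(feats[0]["slope"], feats[1]["slope"])  # rising slope flags deterioration
```

Features of this kind, rather than the raw multivariate snapshot, are what let a classifier see the deterioration that typically precedes an arrest.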

  6. Forecasting with nonlinear time series model: a Monte-Carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    erated recursively up to any step greater than one. For nonlinear time series model, point forecast for step one can be done easily like in the linear case but forecast for a step greater than or equal to ..... London. Franses, P. H. (1998). Time series models for business and Economic forecasting, Cam- bridge University press.

  7. Chronic ethanol exposure produces time- and brain region-dependent changes in gene coexpression networks.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Osterndorff-Kahanek

    Full Text Available Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points, with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes.

  8. Neutrino flavor instabilities in a time-dependent supernova model

    Directory of Open Access Journals (Sweden)

    Sajad Abbar

    2015-12-01

    Full Text Available A dense neutrino medium such as that inside a core-collapse supernova can experience collective flavor conversion or oscillations because of the neutral-current weak interaction among the neutrinos. This phenomenon has been studied in a restricted, stationary supernova model which possesses the (spatial) spherical symmetry about the center of the supernova and the (directional) axial symmetry around the radial direction. Recently it has been shown that these spatial and directional symmetries can be broken spontaneously by collective neutrino oscillations. In this letter we analyze the neutrino flavor instabilities in a time-dependent supernova model. Our results show that collective neutrino oscillations start at approximately the same radius in both the stationary and time-dependent supernova models unless there exist very rapid variations in local physical conditions on timescales of a few microseconds or shorter. Our results also suggest that collective neutrino oscillations can vary rapidly with time in the regimes where they do occur, which needs to be studied in time-dependent supernova models.

  9. Acceleration (Deceleration) Model Supporting Time Delays to Refresh Data

    Directory of Open Access Journals (Sweden)

    José Gerardo Carrillo González

    2018-04-01

    Full Text Available This paper proposes a mathematical model to regulate the acceleration (deceleration) applied by self-driving vehicles in car-following situations. A virtual environment is designed to test the model in different circumstances: (1) the followers decelerate in time if the leader decelerates, considering a time delay of up to 5 s to refresh the data (vehicles' position coordinates) required by the model; (2) with the intention of optimizing space, the vehicles are grouped in platoons, where a 3 s time delay (to update data) is supported if the vehicles have a centre-to-centre spacing of 20 m, and a 1 s time delay is supported at a spacing of 6 m (considering a maximum speed of 20 m/s in both cases); and (3) an algorithm is presented to manage the vehicles' priority at a traffic intersection, where the model regulates the vehicles' acceleration (deceleration) and a balance in the number of vehicles passing from each side is achieved.

  10. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    Science.gov (United States)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.
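The two ingredients of the methodology above (random walkers crossing a partly blocked medium, and the Shannon entropy of the resulting breakthrough-time distribution) can be sketched as follows. The lattice geometry, walker count, and delay rule are illustrative choices, not the authors' exact formulation.

```python
import math
import random

# Hedged sketch: walkers cross a 1-D lattice in which each cell is permeable
# with probability equal to the net-to-gross (NTG) ratio; a blocked draw costs
# one time step. Breakthrough times are summarized by Shannon entropy.
def breakthrough_times(ntg, length=50, walkers=500, seed=1):
    rng = random.Random(seed)
    times = []
    for _ in range(walkers):
        pos, t = 0, 0
        while pos < length:
            t += 1
            if rng.random() < ntg:   # permeable cell: advance
                pos += 1             # otherwise the walker loses a step
        times.append(t)
    return times

def shannon_entropy(times, bins=10):
    lo, hi = min(times), max(times)
    width = (hi - lo) / bins or 1
    counts = [0] * bins
    for t in times:
        counts[min(int((t - lo) / width), bins - 1)] += 1
    ps = [c / len(times) for c in counts if c]
    return -sum(p * math.log(p) for p in ps)

for ntg in (0.9, 0.5):
    print(ntg, shannon_entropy(breakthrough_times(ntg)))
```

Plotting this entropy against the net-to-gross ratio is the kind of diagnostic the paper uses to link the medium description to flow behavior.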

  11. Timing Interactions in Social Simulations: The Voter Model

    Science.gov (United States)

    Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San

    The recent availability of huge high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents, the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well defined characteristic interevent time. As a paradigmatic opinion model, we investigate the voter model: we review the standard update rules and propose two new update rules which are able to account for heterogeneous activity patterns. For the new update rules, each node is updated with a probability that depends on the time since the last event of the node, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Moreover, for the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while for the endogenous update rule there exists a coarsening process that drives the system toward consensus configurations.
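An endogenous-update voter model of the kind described above can be sketched by making a node's activation probability decay with the time since its last change of state. The 1/age activation rule and the complete-graph topology are illustrative simplifications, not the paper's exact rules.

```python
import random

# Hedged sketch of a voter model with an "endogenous" update rule: a node's
# chance of updating decays with the time since its last change of state,
# which can produce heavy-tailed interevent times. Illustrative choices:
# complete graph, 1/age activation probability.
def voter_model(n=100, steps=20000, seed=7):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    age = [1] * n                       # time since each node last changed state
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < 1.0 / age[i]:    # older nodes update less often
            j = rng.randrange(n)           # copy a random neighbor's opinion
            if state[j] != state[i]:
                state[i] = state[j]
                age[i] = 0
        age[i] += 1
    return state

final = voter_model()
density = sum(final) / len(final)
print(density)  # fraction of nodes holding opinion 1
```

Tracking how `density` and the nodes' interevent times evolve under this rule versus a plain Poisson update is the comparison the record describes.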

  12. Research on Coordination of Fresh Produce Supply Chain in Big Market Sales Environment

    Science.gov (United States)

    Su, Juning; Liu, Chenguang

    2014-01-01

    In this paper, we propose two decision models for decentralized and centralized fresh produce supply chains with stochastic supply and demand and controllable transportation time. The optimal order quantity and the optimal transportation time in these two supply chain systems are derived. To improve profits in a decentralized supply chain, based on analyzing the risk taken by each participant in the supply chain, we design a set of contracts which can coordinate this type of fresh produce supply chain with stochastic supply and stochastic demand, and controllable transportation time as well. We also obtain a value range of contract parameters that can increase profits of all participants in the decentralized supply chain. The expected profits of the decentralized setting and the centralized setting are compared with respect to given numerical examples. Furthermore, the sensitivity analyses of the deterioration rate factor and the freshness factor are performed. The results of numerical examples show that the transportation time is shorter, the order quantity is smaller, the total profit of whole supply chain is less, and the possibility of cooperation between supplier and retailer is higher for the fresh produce which is more perishable and its quality decays more quickly. PMID:24764770

  13. Research on coordination of fresh produce supply chain in big market sales environment.

    Science.gov (United States)

    Su, Juning; Wu, Jiebing; Liu, Chenguang

    2014-01-01

    In this paper, we propose two decision models for decentralized and centralized fresh produce supply chains with stochastic supply and demand and controllable transportation time. The optimal order quantity and the optimal transportation time in these two supply chain systems are derived. To improve profits in a decentralized supply chain, based on analyzing the risk taken by each participant in the supply chain, we design a set of contracts which can coordinate this type of fresh produce supply chain with stochastic supply and stochastic demand, and controllable transportation time as well. We also obtain a value range of contract parameters that can increase profits of all participants in the decentralized supply chain. The expected profits of the decentralized setting and the centralized setting are compared with respect to given numerical examples. Furthermore, the sensitivity analyses of the deterioration rate factor and the freshness factor are performed. The results of numerical examples show that the transportation time is shorter, the order quantity is smaller, the total profit of whole supply chain is less, and the possibility of cooperation between supplier and retailer is higher for the fresh produce which is more perishable and its quality decays more quickly.

  14. Dynamics of quality as a strategic variable in complex food supply chain network competition: The case of fresh produce

    Science.gov (United States)

    Nagurney, Anna; Besik, Deniz; Yu, Min

    2018-04-01

    In this paper, we construct a competitive food supply chain network model in which the profit-maximizing producers decide not only as to the volume of fresh produce produced and distributed using various supply chain network pathways, but they also decide, with the associated costs, on the initial quality of the fresh produce. Consumers, in turn, respond to the various producers' product outputs through the prices that they are willing to pay, given also the average quality associated with each producer or brand at the retail outlets. The quality of the fresh produce is captured through explicit formulae that incorporate time, temperature, and other link characteristics with links associated with processing, shipment, storage, etc. Capacities on links are also incorporated as well as upper bounds on the initial product quality of the firms at their production/harvesting sites. The governing concept of the competitive supply chain network model is that of Nash Equilibrium, for which alternative variational inequality formulations are derived, along with existence results. An algorithmic procedure, which can be interpreted as a discrete-time tatonnement process, is then described and applied to compute the equilibrium produce flow patterns and accompanying link Lagrange multipliers in a realistic case study, focusing on peaches, which includes disruptions.

  15. Applications For Real Time NOMADS At NCEP To Disseminate NOAA's Operational Model Data Base

    Science.gov (United States)

    Alpert, J. C.; Wang, J.; Rutledge, G.

    2007-05-01

    A wide range of environmental information, in digital form, with metadata descriptions and supporting infrastructure is contained in the NOAA Operational Modeling Archive Distribution System (NOMADS) and its Real Time (RT) project prototype at the National Centers for Environmental Prediction (NCEP). NOMADS is now delivering on its goal of a seamless framework, from archival to real time data dissemination for NOAA's operational model data holdings. A process is under way to make NOMADS part of NCEP's operational production of products. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development. In the National Research Council's "Completing the Forecast", Recommendation 3.4 states: "NOMADS should be maintained and extended to include (a) long-term archives of the global and regional ensemble forecasting systems at their native resolution, and (b) re-forecast datasets to facilitate post-processing." As one of many participants of NOMADS, NCEP serves the operational model data base using data application protocol (Open-DAP) and other services for participants to serve their data sets and users to obtain them. Using the NCEP global ensemble data as an example, we show an Open-DAP (also known as DODS) client application that provides a request-and-fulfill mechanism for access to the complex ensemble matrix of holdings. As an example of the DAP service, we show a client application which accesses the Global or Regional Ensemble data set to produce user selected weather element event probabilities. The event probabilities are easily extended over model forecast time to show probability histograms defining the future trend of user selected events. This approach insures an efficient use of computer resources because users transmit only the data necessary for their tasks. Data sets are served by OPeN-DAP allowing commercial clients such as MATLAB or IDL as well as freeware clients

  16. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  17. Flipped SU(5) × U(1) in superconformal models

    Energy Technology Data Exchange (ETDEWEB)

    Bailin, D.; Katechou, E.K. (Sussex Univ., Brighton (United Kingdom). School of Mathematical and Physical Sciences); Love, A. (London Univ. (United Kingdom))

    1992-01-10

    This paper reports that flipped SU(5) × U(1) models are constructed in the framework of tensoring of N = 2 superconformal minimal models quotiented by discrete symmetries. Spontaneous breaking of flipped SU(5) × U(1) and extra U(1) factors in the gauge group along F-flat directions of the effective potential is studied.

  18. Space, time, and the third dimension (model error)

    Science.gov (United States)

    Moss, Marshall E.

    1979-01-01

    The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.

  19. Precipitation of metals in produced water : influence on contaminant transport and toxicity

    International Nuclear Information System (INIS)

    Azetsu-Scott, K.; Wohlgeschaffen, G.; Yeats, P.; Dalziel, J.; Niven, S.; Lee, K.

    2006-01-01

    Produced water contains a number of compounds of environmental concern and is the largest volume waste stream from oil and gas production activities. Recent studies have shown that chemicals dissolved in waste water from oil platforms stunted the growth of North Sea cod and affected their breeding patterns. Scientific research is needed to identify the impact of produced water discharges on the environment as well as to identify acceptable disposal limits for produced water. This presentation provided details of a study to characterize produced water discharged within the Atlantic regions of Canada. The study included dose response biological effect studies; research on processes controlling the transport and transformation of contaminants associated with produced water discharges and the development of risk assessment models. The sample location for the study was a site near Sable Island off the coast of Nova Scotia. Chemical analysis of the produced water was conducted as well as toxicity tests. Other tests included a time-series particulate matter sedimentation test; time-series metal and toxicity analysis; time-series change in metal precipitates tests and a produced water/seawater layering experiment. Dissolved and particulate fractions were presented, and the relationship between toxicity and particulate concentrations was examined. Results of the study suggested that produced water contaminants are variable over spatial and temporal scales due to source variations and changes in discharge rates. Chemical changes occur within 24 hours of produced water being mixed with seawater and facilitate contaminant partitioning between the surface micro layer, water column and sediments. Changes in the toxicity of the produced water are correlated with the partitioning of chemical components. The impact zone may be influenced by chemical kinetics that control the distribution of potential toxic metals. 
Further research is needed to investigate the effects of low level

  20. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Full Text Available Abstract Background: Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods: Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results: Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity.
Conclusions: Time series methods applied to historical ED utilization data are an important tool
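
    The trimmed-mean seasonal baseline described in the Methods can be sketched as follows. This is a minimal illustration with hypothetical counts and thresholds, not the authors' code: expected visits for each day of week are the trimmed mean of historical counts for that day, and observed counts far above the baseline are flagged as possible outbreak signals.

```python
# Sketch of a trimmed-mean seasonal baseline for ED visit counts
# (assumed trim fraction and excess threshold; not the study's code).

def trimmed_mean(values, trim=0.1):
    """Mean after dropping the lowest/highest `trim` fraction of values."""
    xs = sorted(values)
    k = int(len(xs) * trim)
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

def weekday_baseline(history):
    """history: list of (weekday, count). Expected count per weekday."""
    by_day = {}
    for day, count in history:
        by_day.setdefault(day, []).append(count)
    return {day: trimmed_mean(counts) for day, counts in by_day.items()}

def flag_outbreaks(observed, baseline, excess=20):
    """Flag (weekday, count) observations exceeding baseline + excess."""
    return [(day, c) for day, c in observed if c > baseline[day] + excess]
```

    In the paper the residuals of such a baseline are further modeled with ARIMA; the sketch stops at the seasonal baseline and a fixed-excess alarm rule.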

  1. Time Series Modelling using Proc Varmax

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2007-01-01

    In this paper it will be demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features such as cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling as outlined by Box...

  2. Time-symmetric universe model and its observational implication

    Energy Technology Data Exchange (ETDEWEB)

    Futamase, T.; Matsuda, T.

    1987-08-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. We consider the observational consequences of such advanced waves, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase.

  3. Simulation of Flash-Flood-Producing Storm Events in Saudi Arabia Using the Weather Research and Forecasting Model

    KAUST Repository

    Deng, Liping; McCabe, Matthew; Stenchikov, Georgiy L.; Evans, Jason P.; Kucera, Paul A.

    2015-01-01

    The challenges of monitoring and forecasting flash-flood-producing storm events in data-sparse and arid regions are explored using the Weather Research and Forecasting (WRF) Model (version 3.5) in conjunction with a range of available satellite

  4. Model-checking dense-time Duration Calculus

    DEFF Research Database (Denmark)

    Fränzle, Martin

    2004-01-01

    Since the seminal work of Zhou Chaochen, M. R. Hansen, and P. Sestoft on decidability of dense-time Duration Calculus [Zhou, Hansen, Sestoft, 1993] it is well-known that decidable fragments of Duration Calculus can only be obtained through withdrawal of much of the interesting vocabulary...... of this logic. While this was formerly taken as an indication that key-press verification of implementations with respect to elaborate Duration Calculus specifications were also impossible, we show that the model property is well decidable for realistic designs which feature natural constraints...... suitably sparser model classes we obtain model-checking procedures for rich subsets of Duration Calculus. Together with undecidability results also obtained, this sheds light upon the exact borderline between decidability and undecidability of Duration Calculi and related logics....

  5. Snyder-de Sitter model from two-time physics

    International Nuclear Information System (INIS)

    Carrisi, M. C.; Mignemi, S.

    2010-01-01

    We show that the symplectic structure of the Snyder model on a de Sitter background can be derived from two-time physics in seven dimensions and propose a Hamiltonian for a free particle consistent with the symmetries of the model.

  6. Predicting the "graduate on time (GOT)" of PhD students using binary logistics regression model

    Science.gov (United States)

    Shariff, S. Sarifah Radiah; Rodzi, Nur Atiqah Mohd; Rahman, Kahartini Abdul; Zahari, Siti Meriam; Deni, Sayang Mohd

    2016-10-01

    The Malaysian government has recently set a new goal of producing 60,000 Malaysian PhD holders by the year 2023. As Malaysia's largest institution of higher learning in terms of size and population, offering more than 500 academic programmes in a conducive and vibrant environment, UiTM has taken several initiatives to help close the gap. Increasing the number of PhD graduates is a challenging process. It has often been recognized that the struggle to reach the target is even more daunting than expected and that implementation falls short of the ideal; progress has been slowed further as the attrition rate increases. This study aims to apply a model that incorporates several factors to predict the number of PhD students who will complete their studies on time. A binary logistic regression model is proposed and fitted to the data set. The results show that only 6.8% of the 2014 PhD students are predicted to graduate on time, and the results are compared with the actual numbers for validation purposes.
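
    A binary logistic regression of the kind used in the study can be sketched in pure Python. The predictors below (full-time status, supervisor meetings per month) and the data are entirely hypothetical; the point is only to show how a fitted model yields a graduate-on-time probability per student.

```python
# Minimal logistic-regression sketch (hypothetical features and data;
# not the study's variables). Label is 1 if the student graduated on
# time. Fit by plain stochastic gradient ascent on the log-likelihood.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights (bias first) fitted by gradient ascent."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - sigmoid(z)          # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    """Predicted probability of graduating on time."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical predictors: (full-time study, supervisor meetings/month)
X = [(1, 4), (1, 3), (0, 1), (0, 0), (1, 5), (0, 2)]
y = [1, 1, 0, 0, 1, 0]
w = fit_logistic(X, y)
```

    Summing the predicted probabilities (or thresholding them at 0.5) over a cohort gives the predicted number of on-time graduates that the abstract reports.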

  7. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....

  8. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model for urban road traffic sensor data, based on change-point detection, is proposed in this paper. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. Through computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
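
    The differencing and segmentation steps can be sketched as follows. This is an illustrative simplification with an assumed fixed threshold, not the paper's adaptive search: change points are taken where the first-order difference is large, and the forecast is a linearly weighted mean of the current segment, so recent observations weigh more.

```python
# Illustrative sketch (assumed threshold; not the authors' algorithm):
# detect change points from large first-order differences, then
# forecast the next travel time from the segment after the last
# change point with linearly increasing weights.

def change_points(series, threshold):
    """Indices where the first-order difference exceeds the threshold."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

def forecast_last_segment(series, threshold):
    """Linearly weighted mean of the segment after the last change point."""
    cps = change_points(series, threshold)
    start = cps[-1] if cps else 0
    seg = series[start:]
    weights = list(range(1, len(seg) + 1))   # recent values weigh more
    return sum(w * x for w, x in zip(weights, seg)) / sum(weights)
```

    On a series that jumps from a free-flow regime near 10 minutes to a congested regime near 26 minutes, only the congested segment contributes to the forecast.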

  9. Discrete-time semi-Markov modeling of human papillomavirus persistence

    Science.gov (United States)

    Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.

    2011-01-01

    Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985
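
    The semi-Markov idea can be made concrete with a small calculation. In a semi-Markov model the probability of clearing an infection at a given visit depends on the time already spent infected, so persistence over several periodic visits is the product of the visit-by-visit survival probabilities. The hazard below is purely hypothetical, not the study's estimate.

```python
# Sketch of a discrete-time semi-Markov persistence calculation
# (illustrative hazard; not the HERS estimates).

def persistence(hazard, visits):
    """P(still infected after `visits` periodic visits).

    hazard(t) = P(clear at visit t | infected for t-1 visits).
    """
    p = 1.0
    for t in range(1, visits + 1):
        p *= 1.0 - hazard(t)
    return p

# Duration-dependent clearance: infections become harder to clear
# the longer they persist (a hypothetical hazard, decreasing in t).
h = lambda t: 0.4 / t
p3 = persistence(h, 3)
```

    A Markov model would instead use a constant hazard, which is exactly the restriction the semi-Markov estimator relaxes.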

  10. Modelling time-dependent mechanical behaviour of softwood using deformation kinetics

    DEFF Research Database (Denmark)

    Engelund, Emil Tang; Svensson, Staffan

    2010-01-01

    The time-dependent mechanical behaviour (TDMB) of softwood is relevant, e.g., when wood is used as building material where the mechanical properties must be predicted for decades ahead. The established mathematical models should be able to predict the time-dependent behaviour. However, these models...... are not always based on the actual physical processes causing time-dependent behaviour and the physical interpretation of their input parameters is difficult. The present study describes the TDMB of a softwood tissue and its individual tracheids. A model is constructed with a local coordinate system that follows...... macroscopic viscoelasticity, i.e., the time-dependent processes are to a significant degree reversible....

  11. Space-time scenarios of wind power generation produced using a Gaussian copula with parametrized precision matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tastu, J.; Pinson, P.; Madsen, Henrik

    2013-09-01

    The emphasis in this work is placed on generating space-time trajectories (also referred to as scenarios) of wind power generation. This calls for prediction of multivariate densities describing wind power generation at a number of distributed locations and for a number of successive lead times. A modelling approach taking advantage of sparsity of precision matrices is introduced for the description of the underlying space-time dependence structure. The proposed parametrization of the dependence structure accounts for such important process characteristics as non-constant conditional precisions and direction-dependent cross-correlations. Accounting for the space-time effects is shown to be crucial for generating high quality scenarios. (Author)
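
    The Gaussian-copula step can be sketched for the simplest possible case: two correlated sites at one lead time. The paper parametrizes a sparse precision matrix over many sites and lead times; the sketch below uses a single assumed correlation and standard-library code only.

```python
# Sketch of sampling from a bivariate Gaussian copula (two wind farms,
# one lead time; illustrative only -- the paper uses a parametrized
# precision matrix over the full space-time dependence structure).
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def copula_sample(rho, rng):
    """One draw (u1, u2) from a Gaussian copula with correlation rho."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    # The uniforms are then fed through each site's predictive
    # quantile function to obtain wind-power scenario values.
    return phi(z1), phi(z2)

rng = random.Random(1)
samples = [copula_sample(0.8, rng) for _ in range(5000)]
```

    The copula separates the dependence structure (here the assumed rho) from the marginal predictive densities, which is what lets the trajectories respect both.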

  12. Ecological monitoring in a discrete-time prey-predator model.

    Science.gov (United States)

    Gámez, M; López, I; Rodríguez, C; Varga, Z; Garay, J

    2017-09-21

    The paper is aimed at the methodological development of ecological monitoring in discrete-time dynamic models. In earlier papers, in the framework of continuous-time models, we have shown how a systems-theoretical methodology can be applied to the monitoring of the state process of a system of interacting populations, also estimating certain abiotic environmental changes such as pollution, climatic or seasonal changes. In practice, however, there may be good reasons to use discrete-time models. (For instance, there may be discrete cycles in the development of the populations, or observations can be made only at discrete time steps.) Therefore the present paper is devoted to the development of the monitoring methodology in the framework of discrete-time models of population ecology. By monitoring we mean that, observing only certain component(s) of the system, we reconstruct the whole state process. This may be necessary, e.g., when in a complex ecosystem the observation of the densities of certain species is impossible, or too expensive. For the first presentation of the offered methodology, we have chosen a discrete-time version of the classical Lotka-Volterra prey-predator model. This is a minimal but not trivial system where the methodology can still be presented. We also show how this methodology can be applied to estimate the effect of an abiotic environmental change, using a component of the population system as an environmental indicator. Although this approach is illustrated in a simplest possible case, it can be easily extended to larger ecosystems with several interacting populations and different types of abiotic environmental effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
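
    The monitoring idea, reconstructing the whole state process from one observed component, can be sketched for a discrete-time Lotka-Volterra system. With the model known, consecutive prey observations determine the predator density by inverting the prey equation. The parameters and the specific discrete-time form below are illustrative assumptions, not the paper's observer design.

```python
# Sketch of monitoring in a discrete-time prey-predator model
# (hypothetical rates and model form). Simulate the system, then
# reconstruct the unobserved predator series from prey data alone by
# inverting the prey equation:
#   x[t+1] = x[t] * (1 + A - B*y[t])  =>  y[t] = (1 + A - x[t+1]/x[t]) / B

A, B, C, D = 0.1, 0.02, 0.1, 0.01   # assumed interaction rates

def step(x, y):
    return x * (1 + A - B * y), y * (1 - C + D * x)

def simulate(x0, y0, n):
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = step(xs[-1], ys[-1])
        xs.append(x)
        ys.append(y)
    return xs, ys

def reconstruct_predator(xs):
    """Estimate y[t] from consecutive prey observations x[t], x[t+1]."""
    return [(1 + A - xs[t + 1] / xs[t]) / B for t in range(len(xs) - 1)]
```

    In the noise-free case the reconstruction is exact; the systems-theoretical observers in the paper handle the general setting where such direct inversion is not available.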

  13. On modeling panels of time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    textabstractThis paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a

  14. On the Designing of Model Checkers for Real-Time Distributed Systems

    Directory of Open Access Journals (Sweden)

    D. Yu. Volkanov

    2012-01-01

    Full Text Available To verify real-time properties of UML statecharts one may apply UPPAAL, a toolbox for model checking of real-time systems. One of the most suitable ways to specify an operational semantics of UML statecharts is to invoke the formal model of Hierarchical Timed Automata. Since the model language of UPPAAL is based on Networks of Timed Automata, one has to provide a conversion of Hierarchical Timed Automata to Networks of Timed Automata. In this paper we describe this conversion algorithm and prove that it is correct w.r.t. the UPPAAL query language, which is based on a subset of Timed CTL.

  15. Joint Models of Longitudinal and Time-to-Event Data with More Than One Event Time Outcome: A Review.

    Science.gov (United States)

    Hickey, Graeme L; Philipson, Pete; Jorgensen, Andrea; Kolamunnage-Dona, Ruwanthi

    2018-01-31

    Methodological development and clinical application of joint models of longitudinal and time-to-event outcomes have grown substantially over the past two decades. However, much of this research has concentrated on a single longitudinal outcome and a single event time outcome. In clinical and public health research, patients who are followed up over time may often experience multiple, recurrent, or a succession of clinical events. Models that utilise such multivariate event time outcomes are quite valuable in clinical decision-making. We comprehensively review the literature for implementation of joint models involving more than a single event time per subject. We consider the distributional and modelling assumptions, including the association structure, estimation approaches, software implementations, and clinical applications. Research into this area is proving highly promising, but to date remains in its infancy.

  16. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is demonstrated for time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  17. A Data Model for Determining Weather's Impact on Travel Time

    DEFF Research Database (Denmark)

    Andersen, Ove; Torp, Kristian

    2016-01-01

    Accurately estimating travel times in road networks is a complex task because travel times depend on factors such as the weather. In this paper, we present a generic model for integrating weather data with GPS data to improve the accuracy of estimated travel times. First, we present a data model...... for storing and map-matching GPS data, and for integrating this data with detailed weather data. The model is generic in the sense that it can be used anywhere GPS data and weather data are available. Next, we analyze the correlation between travel time and the weather classes dry, fog, rain, and snow, along...... with wind's impact on travel time. Using a data set of 1.6 billion GPS records collected from 10,560 vehicles over a 5-year period from all of Denmark, we show that snow can increase travel time by up to 27% and strong headwind can increase travel time by up to 19% (compared to dry calm weather...

  18. Evaluation of a real-time PCR assay for rectal screening of OXA-48-producing Enterobacteriaceae in a general intensive care unit of an endemic hospital.

    Science.gov (United States)

    Fernández, J; Cunningham, S A; Fernández-Verdugo, A; Viña-Soria, L; Martín, L; Rodicio, M R; Escudero, D; Vazquez, F; Mandrekar, J N; Patel, R

    2017-07-01

    Carbapenemase-producing Enterobacteriaceae are increasing worldwide. Rectal screening for these bacteria can inform the management of infected and colonized patients, especially those admitted to intensive care units (ICUs). A laboratory-developed qualitative duplex real-time polymerase chain reaction assay for rapid detection of OXA-48-like and VIM producing Enterobacteriaceae, performed on rectal swabs, was designed and evaluated in an intensive care unit with endemic presence of OXA-48. During analytical assay validation, no cross-reactivity was observed and 100% sensitivity and specificity were obtained for both bla OXA-48-like and bla VIM in all spiked clinical samples. During the clinical part of the study, the global sensitivity and specificity of the real-time PCR assay for OXA-48 detection were 95.7% and 100% (P=0.1250), respectively, in comparison with culture; no VIM-producing Enterobacteriaceae were detected. Clinical features of patients in the ICU who were colonized or infected with OXA-48 producing Enterobacteriaceae, including outcome, were analyzed. Most had severe underlying conditions and risk factors for colonization with carbapenemase-producing Enterobacteriaceae before or during ICU admission, such as previous antimicrobial therapy, prior healthcare exposure (including long-term care), chronic disease, immunosuppression and/or the presence of an intravascular catheter and/or mechanical ventilation device. The described real-time PCR assay is fast (~2-3 hours, if DNA extraction is included) and simple to perform, and results are easy to interpret, features that make it applicable in routine clinical microbiology laboratories. Implementation in endemic hospitals could contribute to early detection of patients colonized by OXA-48 producing Enterobacteriaceae and prevention of their spread. Copyright © 2017 Elsevier Inc. All rights reserved.
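
    The headline performance figures are computed from a 2x2 table of assay results against the culture reference. The counts below are hypothetical (the abstract does not give the table), chosen only to reproduce the reported 95.7% sensitivity and 100% specificity.

```python
# Sketch of the sensitivity/specificity calculation for a screening
# assay versus a culture reference (hypothetical counts; the study's
# actual 2x2 table is not given in the abstract).

def sensitivity(tp, fn):
    """P(assay positive | culture positive)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """P(assay negative | culture negative)."""
    return tn / (tn + fp)

# Hypothetical counts reproducing 95.7% / 100%:
# 22 true positives, 1 false negative, 150 true negatives, 0 false positives.
sens = round(100 * sensitivity(22, 1), 1)
spec = round(100 * specificity(150, 0), 1)
```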

  19. Space-time modeling of timber prices

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber timber prices of 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...

  20. Modelling the time-variant dietary exposure of PCBs in China over the period 1930 to 2100.

    Science.gov (United States)

    Zhao, Shizhen; Breivik, Knut; Jones, Kevin C; Sweetman, Andrew J

    2018-06-06

    This study aimed for the first time to reconstruct historical exposure profiles for PCBs to the Chinese population, by examining the combined effect of changing temporal emissions and dietary transition. A long-term (1930-2100) dynamic simulation of human exposure using realistic emission scenarios, including primary emissions, unintentional emissions and emissions from e-waste, combined with dietary transition trends was conducted by a multimedia fate model (BETR-Global) linked to a bioaccumulation model (ACC-HUMAN). The model predicted an approximate 30-year delay of peak body burden for PCB-153 in a 30-year-old Chinese female, compared to their European counterpart. This was mainly attributed to a combination of change in diet and divergent emission patterns in China. A fish-based diet was predicted to result in up to 8 times higher body burden than a vegetable-based diet (2010-2100). During the production period, a worst-case scenario assuming only consumption of imported food from a region with more extensive production and usage of PCBs would result in up to 4 times higher body burden compared to consumption of only locally produced food. However, such differences gradually diminished after cessation of production. Therefore, emission reductions in China alone may not be sufficient to protect human health for PCB-like chemicals, particularly during the period of mass production. The results from this study illustrate that human exposure is also likely to be dictated by inflows of PCBs via the environment, waste and food.

  1. MODELING OF POWER SYSTEMS AND TESTING OF RELAY PROTECTION DEVICES IN REAL AND MODEL TIME

    Directory of Open Access Journals (Sweden)

    I. V. Novash

    2017-01-01

    Full Text Available Methods of modelling power system modes and of testing relay protection devices are considered, both with simulation complexes operating in real time and with computer software systems that simulate a virtual time scale. Input protection signals in virtual-model-time simulation are obtained in a computational experiment, whereas tests of protective devices are carried out with hardware and software test systems using estimated input signals. Studying power system stability when the modes of generating and consuming electrical equipment and the conditions of relay protection devices change requires testing with digital simulators in a closed-loop mode. Here the feedbacks between a model of the power system operating in real time and external devices or their models must be determined (modelled). Real-time modelling and an analysis of international experience in the use of digital power system simulation for real-time simulation (the RTDS simulator) have been carried out. Examples are given of the use of RTDS systems by foreign energy companies to test relay protection and control systems, to test equipment and devices of automatic control, to analyze cyber security, and to evaluate the operation of energy systems under different scenarios of emergency situations. Some quantitative data on the distribution of RTDS in different countries and in Russia are presented. It is noted that the leading energy universities of Russia use real-time simulation not only to solve scientific and technical problems, but also to conduct training and laboratory classes on modelling of electric networks and anti-emergency automatic equipment with students. In order to check the serviceability of relay protection devices without taking into account the reaction of the power system, tests can be performed in an open-loop mode with the

  2. Modelling Conditional and Unconditional Heteroskedasticity with Smoothly Time-Varying Structure

    DEFF Research Database (Denmark)

    Amado, Christina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the conditional variance to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterizations describe both nonlinearity and structural change...... in the conditional and unconditional variances where the transition between regimes over time is smooth. A modelling strategy for these new time-varying parameter GARCH models is developed. It relies on a sequence of Lagrange multiplier tests, and the adequacy of the estimated models is investigated by Lagrange multiplier type misspecification tests. Finite-sample properties of these procedures and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice......

  3. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.

    Science.gov (United States)

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes.
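
    The Kalman-filtering adjustment step can be sketched with a scalar filter. The fixed baseline values below stand in for the trained SVM's predictions, and the noise variances are assumed; the filter blends each baseline prediction with the latest observed travel time.

```python
# Sketch of the Kalman step that adjusts baseline travel-time
# predictions with recent observations (scalar filter; baseline is a
# stand-in for the paper's SVM output, q and r are assumed variances).

def kalman_adjust(baseline, observations, q=1.0, r=4.0):
    """Blend baseline predictions with noisy observed travel times.

    q: process noise variance, r: observation noise variance.
    """
    p = 1.0                              # initial state variance
    adjusted = []
    for b, z in zip(baseline, observations):
        x_pred, p_pred = b, p + q        # predict: trust the baseline,
                                         # inflate the uncertainty
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - x_pred)    # update with latest observation
        p = (1 - k) * p_pred
        adjusted.append(x)
    return adjusted

# Baseline says 10 min, but buses are currently taking 14 min:
adjusted = kalman_adjust([10.0, 10.0, 10.0], [14.0, 14.0, 14.0])
```

    As observations keep disagreeing with the baseline, the gain grows and the adjusted estimates move progressively toward the observed travel times.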

  5. A Circuit Model of Real Time Human Body Hydration.

    Science.gov (United States)

    Asogwa, Clement Ogugua; Teshome, Assefa K; Collins, Stephen F; Lai, Daniel T H

    2016-06-01

    Changes in human body hydration leading to excess fluid losses or overload affect the body fluid's ability to provide the necessary support for healthy living. We propose a time-dependent circuit model of real-time human body hydration, which models the human body tissue as a signal transmission medium. The circuit model predicts the attenuation of a propagating electrical signal. Hydration rates are modeled by a time constant τ, which characterizes the individual-specific metabolic function of the body part measured. We define a surrogate human body anthropometric parameter θ as the muscle-fat ratio and, comparing it with the body mass index (BMI), we find theoretically that the rate of hydration varies from 1.73 dB/min, for high θ and low τ, to 0.05 dB/min for low θ and high τ. We compare these theoretical values with empirical measurements and show that real-time changes in human body hydration can be observed by measuring signal attenuation. We took empirical measurements using a vector network analyzer and obtained different hydration rates for various BMI values, ranging from 0.6 dB/min for a BMI of 22.7 kg/m² down to 0.04 dB/min for 41.2 kg/m². We conclude that the galvanic coupling circuit model can predict changes in the volume of the body fluid, which are essential in diagnosing and monitoring treatment of body fluid disorders. Individuals with a high BMI would have a higher time-dependent biological characteristic (time constant τ), a lower metabolic rate, and a lower rate of hydration.
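The role of the time constant τ can be illustrated with a simple first-order attenuation model, in which a small τ yields a high initial hydration rate in dB/min (compare the abstract's 1.73 vs. 0.05 dB/min contrast). All numerical values below are illustrative assumptions, not the paper's measurements:

```python
import math

def attenuation(t_min, a0=60.0, a_inf=52.0, tau=10.0):
    """First-order model of signal attenuation (dB) during rehydration:
    decays exponentially from a0 toward a_inf with time constant tau (min).
    a0, a_inf, and tau are illustrative values only."""
    return a_inf + (a0 - a_inf) * math.exp(-t_min / tau)

def initial_hydration_rate(a0, a_inf, tau):
    """Initial rate of change of attenuation in dB/min: |dA/dt| at t = 0."""
    return (a0 - a_inf) / tau
```

Holding the attenuation span fixed, doubling τ halves the initial rate, which mirrors the abstract's observation that high-τ (high-BMI) individuals hydrate at a lower dB/min rate.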

  6. Search for the standard model Higgs boson produced in association with a top-quark pair in pp collisions at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Fabjan, C.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, C.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Taurok, A.; Treberer-treberspurg, W.; Waltenberger, W.; Wulz, C. -E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Bansal, M.; Bansal, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Luyckx, S.; Mucibello, L.; Ochesanu, S.; Roland, B.; Rougny, R.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D’Hondt, J.; Gonzalez Suarez, R.; Kalogeropoulos, A.; Keaveney, J.; Maes, M.; Olbrechts, A.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Villella, I.; Clerbaux, B.; De Lentdecker, G.; Dero, V.; Gay, A. P. R.; Hreus, T.; Léonard, A.; Marage, P. E.; Mohammadi, A.; Reis, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Adler, V.; Beernaert, K.; Benucci, L.; Cimmino, A.; Costantini, S.; Garcia, G.; Grunewald, M.; Klein, B.; Lellouch, J.; Marinov, A.; Mccartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Sigamani, M.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Walsh, S.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Bruno, G.; Castello, R.; Ceard, L.; Delaere, C.; du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Lemaitre, V.; Liao, J.; Militaru, O.; Nuttens, C.; Pagano, D.; Pin, A.; Piotrzkowski, K.; Popov, A.; Selvaggi, M.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Alves, G. A.; Correa Martins, M.; Martins, T.; Pol, M. E.; Souza, M. H. G.; Aldá, W. L.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. 
M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Malbouisson, H.; Malek, M.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Soares Jorge, L.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Anjos, T. S.; Bernardes, C. A.; Dias, F. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Lagana, C.; Marinho, F.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Genchev, V.; Iaydjiev, P.; Piperov, S.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Tcholakov, V.; Trayanov, R.; Vutova, M.; Dimitrov, A.; Hadjiiska, R.; Kozhuharov, V.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Jiang, C. H.; Liang, D.; Liang, S.; Meng, X.; Tao, J.; Wang, J.; Wang, X.; Wang, Z.; Xiao, H.; Xu, M.; Asawatangtrakuldee, C.; Ban, Y.; Guo, Y.; Li, Q.; Li, W.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Zhang, L.; Zou, W.; Avila, C.; Carrillo Montoya, C. A.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Plestina, R.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Duric, S.; Kadija, K.; Luetic, J.; Mekterovic, D.; Morovic, S.; Tikvica, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Finger, M.; Finger, M.; Assran, Y.; Ellithi Kamel, A.; Kuotb Awad, A. M.; Mahmoud, M. A.; Radi, A.; Kadastik, M.; Müntel, M.; Murumaa, M.; Raidal, M.; Rebane, L.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Korpela, A.; Tuuva, T.; Besancon, M.; Choudhury, S.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Millischer, L.; Nayak, A.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Benhabib, L.; Bianchini, L.; Bluj, M.; Busson, P.; Charlot, C.; Daci, N.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Florent, A.; Granier de Cassagnac, R.; Haguenauer, M.; Miné, P.; Mironov, C.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Paganini, P.; Sabes, D.; Salerno, R.; Sirois, Y.; Veelken, C.; Zabi, A.; Agram, J. -L.; Andrea, J.; Bloch, D.; Bodin, D.; Brom, J. -M.; Chabert, E. C.; Collard, C.; Conte, E.; Drouhin, F.; Fontaine, J. -C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Juillot, P.; Le Bihan, A. -C.; Van Hove, P.; Beauceron, S.; Beaupere, N.; Bondu, O.; Boudoul, G.; Brochet, S.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Sgandurra, L.; Sordini, V.; Tschudi, Y.; Vander Donckt, M.; Verdier, P.; Viret, S.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Calpas, B.; Edelhoff, M.; Feld, L.; Heracleous, N.; Hindrichs, O.; Jussen, R.; Klein, K.; Merz, J.; Ostapchuk, A.; Perieanu, A.; Raupach, F.; Sammet, J.; Schael, S.; Sprenger, D.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Caudron, J.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Olschewski, M.; Padeken, K.; Papacz, P.; Pieta, H.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Steggemann, J.; Teyssier, D.; Thüer, S.; Weber, M.; Bontenackels, M.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Lingemann, J.; Nowack, A.; Nugent, I. 
M.; Perchalla, L.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behr, J.; Behrenhoff, W.; Behrens, U.; Bergholz, M.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Costanza, F.; Dammann, D.; Diez Pardos, C.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Flucke, G.; Geiser, A.; Glushkov, I.; Gunnellini, P.; Habib, S.; Hauk, J.; Hellwig, G.; Jung, H.; Kasemann, M.; Katsas, P.; Kleinwort, C.; Kluge, H.; Krämer, M.; Krücker, D.; Kuznetsova, E.; Lange, W.; Leonard, J.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Marienfeld, M.; Melzer-Pellmann, I. -A.; Meyer, A. B.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Novgorodova, O.; Nowak, F.; Olzem, J.; Perrey, H.; Petrukhin, A.; Pitzl, D.; Raspereza, A.; Ribeiro Cipriano, P. M.; Riedl, C.; Ron, E.; Rosin, M.; Salfeld-Nebgen, J.; Schmidt, R.; Schoerner-Sadenius, T.; Sen, N.; Stein, M.; Walsh, R.; Wissing, C.; Blobel, V.; Enderle, H.; Erfle, J.; Gebbert, U.; Görner, M.; Gosselink, M.; Haller, J.; Höing, R. S.; Kaschube, K.; Kaussen, G.; Kirschenmann, H.; Klanner, R.; Lange, J.; Peiffer, T.; Pietsch, N.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schum, T.; Seidel, M.; Sibille, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Thomsen, J.; Vanelderen, L.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Chwalek, T.; De Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Guthoff, M.; Hackstein, C.; Hartmann, F.; Hauth, T.; Heinrich, M.; Held, H.; Hoffmann, K. H.; Husemann, U.; Katkov, I.; Komaragiri, J. R.; Kornmayer, A.; Lobelle Pardo, P.; Martschei, D.; Mueller, S.; Müller, Th.; Niegel, M.; Nürnberg, A.; Oberst, O.; Ott, J.; Quast, G.; Rabbertz, K.; Ratnikov, F.; Ratnikova, N.; Röcker, S.; Schilling, F. -P.; Schott, G.; Simonis, H. J.; Stober, F. 
M.; Troendle, D.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Zeise, M.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Kesisoglou, S.; Kyriakis, A.; Loukas, D.; Markou, A.; Markou, C.; Ntomari, E.; Gouskos, L.; Mertzimekis, T. J.; Panagiotou, A.; Saoulidou, N.; Stiliaris, E.; Aslanoglou, X.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Radics, B.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Karancsi, J.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Beri, S. B.; Bhatnagar, V.; Dhingra, N.; Gupta, R.; Kaur, M.; Mehta, M. Z.; Mittal, M.; Nishu, N.; Saini, L. K.; Sharma, A.; Singh, J. B.; Kumar, Ashok; Kumar, Arun; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Saxena, P.; Sharma, V.; Shivpuri, R. K.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Modak, A.; Mukherjee, S.; Roy, D.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kailas, S.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Chatterjee, R. M.; Ganguly, S.; Guchait, M.; Gurtu, A.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Banerjee, S.; Dugad, S.; Arfaei, H.; Bakhshiansohi, H.; Etesami, S. M.; Fahim, A.; Hesari, H.; Jafari, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Paktinat Mehdiabadi, S.; Safarzadeh, B.; Zeinali, M.; Abbrescia, M.; Barbone, L.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Marangelli, B.; My, S.; Nuzzo, S.; Pacifico, N.; Pompili, A.; Pugliese, G.; Selvaggi, G.; Silvestris, L.; Singh, G.; Venditti, R.; Verwilligen, P.; Zito, G.; Abbiendi, G.; Benvenuti, A. 
C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Meneghelli, M.; Montanari, A.; Navarria, F. L.; Odorici, F.; Perrotta, A.; Primavera, F.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Travaglini, R.; Albergo, S.; Chiorboli, M.; Costa, S.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D’Alessandro, R.; Focardi, E.; Frosali, S.; Gallo, E.; Gonzi, S.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Colafranceschi, S.; Fabbri, F.; Piccolo, D.; Fabbricatore, P.; Musenich, R.; Tosi, S.; Benaglia, A.; De Guio, F.; Di Matteo, L.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Lucchini, M. T.; Malvezzi, S.; Manzoni, R. A.; Martelli, A.; Massironi, A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Cosa, A.; Dogangun, O.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Biasotto, M.; Bisello, D.; Branca, A.; Carlin, R.; Checchia, P.; Dorigo, T.; Galanti, M.; Gasparini, F.; Gasparini, U.; Giubilato, P.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Lazzizzera, I.; Margoni, M.; Meneguzzo, A. T.; Passaseo, M.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Vanini, S.; Ventura, S.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Riccardi, C.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Nappi, A.; Romeo, F.; Saha, A.; Santocchia, A.; Spiezia, A.; Taroni, S.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Broccolo, G.; Castaldi, R.; D’Agnolo, R. 
T.; Dell’Orso, R.; Fiori, F.; Foà, L.; Giassi, A.; Kraan, A.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Vernieri, C.; Barone, L.; Cavallari, F.; Del Re, D.; Diemoz, M.; Fanelli, C.; Grassi, M.; Longo, E.; Margaroli, F.; Meridiani, P.; Micheli, F.; Nourbakhsh, S.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Soffi, L.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Demaria, N.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Ortona, G.; Pastrone, N.; Pelliccioni, M.; Potenza, A.; Romero, A.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; Marone, M.; Montanino, D.; Penzo, A.; Schizzi, A.; Zanetti, A.; Kim, T. Y.; Nam, S. K.; Chang, S.; Kim, D. H.; Kim, G. N.; Kim, J. E.; Kong, D. J.; Oh, Y. D.; Park, H.; Son, D. C.; Kim, J. Y.; Kim, Zero J.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, T. J.; Lee, K. S.; Moon, D. H.; Park, S. K.; Roh, Y.; Choi, M.; Kim, J. H.; Park, C.; Park, I. C.; Park, S.; Ryu, G.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, M. S.; Kwon, E.; Lee, B.; Lee, J.; Lee, S.; Seo, H.; Yu, I.; Grigelionis, I.; Juodagalvis, A.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Lopez-Fernandez, R.; Martínez-Ortega, J.; Sanchez-Hernandez, A.; Villasenor-Cendejas, L. M.; Carrillo Moreno, S.; Vazquez Valencia, F.; Salazar Ibarguen, H. A.; Casimiro Linares, E.; Morelos Pineda, A.; Reyes-Santos, M. A.; Krofcheck, D.; Bell, A. J.; Butler, P. H.; Doesburg, R.; Reucroft, S.; Silverwood, H.; Ahmad, M.; Asghar, M. I.; Butt, J.; Hoorani, H. R.; Khalid, S.; Khan, W. A.; Khurshid, T.; Qazi, S.; Shah, M. 
A.; Shoaib, M.; Bialkowska, H.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Wrochna, G.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Wolszczak, W.; Almeida, N.; Bargassa, P.; David, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Seixas, J.; Varela, J.; Vischia, P.; Bunin, P.; Golutvin, I.; Gorbunov, I.; Karjavin, V.; Konoplyanikov, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Shulha, S.; Smirnov, V.; Volodko, A.; Zarubin, A.; Evstyukhin, S.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Matveev, V.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Erofeeva, M.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Safronov, G.; Semenov, S.; Spiridonov, A.; Stolin, V.; Vlasov, E.; Zhokin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Markina, A.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Krpic, D.; Milosevic, J.; Aguilar-Benitez, M.; Alcaraz Maestre, J.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Ferrando, A.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. 
M.; Josa, M. I.; Merino, G.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Santaolalla, J.; Soares, M. S.; Willmott, C.; Albajar, C.; de Trocóniz, J. F.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Lloret Iglesias, L.; Piedra Gomez, J.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Chuang, S. H.; Duarte Campderros, J.; Fernandez, M.; Gomez, G.; Gonzalez Sanchez, J.; Graziano, A.; Jorda, C.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Bendavid, J.; Benitez, J. F.; Bernet, C.; Bianchi, G.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Christiansen, T.; Coarasa Perez, J. A.; d’Enterria, D.; Dabrowski, A.; De Roeck, A.; De Visscher, S.; Di Guida, S.; Dobson, M.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Eugster, J.; Funk, W.; Georgiou, G.; Giffels, M.; Gigi, D.; Gill, K.; Giordano, D.; Giunta, M.; Glege, F.; Gomez-Reino Garrido, R.; Govoni, P.; Gowdy, S.; Guida, R.; Hammer, J.; Hansen, M.; Harris, P.; Hartl, C.; Harvey, J.; Hegner, B.; Hinzmann, A.; Innocente, V.; Janot, P.; Kaadze, K.; Karavakis, E.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lee, Y. 
-J.; Lourenço, C.; Malberti, M.; Malgeri, L.; Mannelli, M.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moser, R.; Mulders, M.; Musella, P.; Nesvold, E.; Orsini, L.; Palencia Cortezon, E.; Perez, E.; Perrozzi, L.; Petrilli, A.; Pfeiffer, A.; Pierini, M.; Pimiä, M.; Piparo, D.; Polese, G.; Quertenmont, L.; Racz, A.; Reece, W.; Rodrigues Antunes, J.; Rolandi, G.; Rovelli, C.; Rovere, M.; Sakulin, H.; Santanastasio, F.; Schäfer, C.; Schwick, C.; Segoni, I.; Sekmen, S.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Stoye, M.; Tsirou, A.; Veres, G. I.; Vlimant, J. R.; Wöhri, H. K.; Worm, S. D.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Gabathuler, K.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; König, S.; Kotlinski, D.; Langenegger, U.; Meier, F.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bortignon, P.; Buchmann, M. A.; Casal, B.; Chanon, N.; Deisher, A.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Grab, C.; Hits, D.; Lecomte, P.; Lustermann, W.; Marini, A. C.; Martinez Ruiz del Arbol, P.; Mohr, N.; Moortgat, F.; Nägeli, C.; Nef, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pape, L.; Pauss, F.; Peruzzi, M.; Ronga, F. J.; Rossini, M.; Sala, L.; Sanchez, A. K.; Starodumov, A.; Stieger, B.; Takahashi, M.; Tauscher, L.; Thea, A.; Theofilatos, K.; Treille, D.; Urscheler, C.; Wallny, R.; Weber, H. A.; Amsler, C.; Chiochia, V.; Favaro, C.; Ivova Rikova, M.; Kilminster, B.; Millan Mejias, B.; Otiougova, P.; Robmann, P.; Snoek, H.; Tupputi, S.; Verzetti, M.; Cardaci, M.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Li, S. W.; Lin, W.; Lu, Y. J.; Volpe, R.; Yu, S. S.; Bartalini, P.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Dietz, C.; Grundler, U.; Hou, W. -S.; Hsiung, Y.; Kao, K. Y.; Lei, Y. J.; Lu, R. -S.; Majumder, D.; Petrakou, E.; Shi, X.; Shiu, J. G.; Tzeng, Y. M.; Wang, M.; Asavapibhop, B.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. 
N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sogut, K.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Akin, I. V.; Aliev, T.; Bilin, B.; Bilmis, S.; Deniz, M.; Gamsizkan, H.; Guler, A. M.; Karapinar, G.; Ocalan, K.; Ozpineci, A.; Serin, M.; Sever, R.; Surat, U. E.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Isildak, B.; Kaya, M.; Kaya, O.; Ozkorucuklu, S.; Sonmez, N.; Bahtiyar, H.; Barlas, E.; Cankocak, K.; Günaydin, Y. O.; Vardarlı, F. I.; Yücel, M.; Levchuk, L.; Sorokin, P.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Frazier, R.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Kreczko, L.; Metson, S.; Newbold, D. M.; Nirunpong, K.; Poll, A.; Senkin, S.; Smith, V. J.; Williams, T.; Basso, L.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Jackson, J.; Olaiya, E.; Petyt, D.; Radburn-Smith, B. C.; Shepherd-Themistocleous, C. H.; Tomalin, I. R.; Womersley, W. J.; Bainbridge, R.; Ball, G.; Buchmuller, O.; Colling, D.; Cripps, N.; Cutajar, M.; Dauncey, P.; Davies, G.; Della Negra, M.; Ferguson, W.; Fulcher, J.; Gilbert, A.; Guneratne Bryer, A.; Hall, G.; Hatherell, Z.; Hays, J.; Iles, G.; Jarvis, M.; Karapostoli, G.; Kenzie, M.; Lyons, L.; Magnan, A. -M.; Marrouche, J.; Mathias, B.; Nandi, R.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Pioppi, M.; Raymond, D. M.; Rogerson, S.; Rose, A.; Seez, C.; Sharp, P.; Sparrow, A.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Wakefield, S.; Wardle, N.; Whyntie, T.; Chadwick, M.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Martin, W.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Scarborough, T.; Charaf, O.; Cooper, S. 
I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Heister, A.; Lawson, P.; Lazic, D.; Rohlf, J.; Sperka, D.; St. John, J.; Sulak, L.; Alimena, J.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Segala, M.; Sinthuprasith, T.; Speer, T.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Caulfield, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Dolen, J.; Erbacher, R.; Gardner, M.; Houtz, R.; Ko, W.; Kopecky, A.; Lander, R.; Mall, O.; Miceli, T.; Nelson, R.; Pellett, D.; Ricci-Tam, F.; Rutherford, B.; Searle, M.; Smith, J.; Squires, M.; Tripathi, M.; Yohay, R.; Andreev, V.; Cline, D.; Cousins, R.; Duris, J.; Erhan, S.; Everaerts, P.; Farrell, C.; Felcini, M.; Hauser, J.; Ignatenko, M.; Jarvis, C.; Rakness, G.; Schlein, P.; Traczyk, P.; Valuev, V.; Weber, M.; Babb, J.; Clare, R.; Dinardo, M. E.; Ellison, J.; Gary, J. W.; Giordano, F.; Hanson, G.; Liu, H.; Long, O. R.; Luthra, A.; Nguyen, H.; Paramesvaran, S.; Sturdy, J.; Sumowidagdo, S.; Wilken, R.; Wimpenny, S.; Andrews, W.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Evans, D.; Holzner, A.; Kelley, R.; Lebourgeois, M.; Letts, J.; Macneill, I.; Mangano, B.; Padhi, S.; Palmer, C.; Petrucciani, G.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Sudano, E.; Tadel, M.; Tu, Y.; Vartak, A.; Wasserbaech, S.; Würthwein, F.; Yagil, A.; Yoo, J.; Barge, D.; Bellan, R.; Campagnari, C.; D’Alfonso, M.; Danielson, T.; Flowers, K.; Geffert, P.; George, C.; Golf, F.; Incandela, J.; Justus, C.; Kalavase, P.; Kovalskyi, D.; Krutelyov, V.; Lowette, S.; Magaña Villalba, R.; Mccoll, N.; Pavlunin, V.; Ribnik, J.; Richman, J.; Rossin, R.; Stuart, D.; To, W.; West, C.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Di Marco, E.; Duarte, J.; Kcira, D.; Ma, Y.; Mott, A.; Newman, H. B.; Rogan, C.; Spiropulu, M.; Timciuc, V.; Veverka, J.; Wilkinson, R.; Xie, S.; Yang, Y.; Zhu, R. 
Y.; Azzolini, V.; Calamba, A.; Carroll, R.; Ferguson, T.; Iiyama, Y.; Jang, D. W.; Liu, Y. F.; Paulini, M.; Russ, J.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Drell, B. R.; Ford, W. T.; Gaz, A.; Luiggi Lopez, E.; Nauenberg, U.; Smith, J. G.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Eggert, N.; Gibbons, L. K.; Hopkins, W.; Khukhunaishvili, A.; Kreis, B.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Burkett, K.; Butler, J. N.; Chetluru, V.; Cheung, H. W. K.; Chle-bana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gao, Y.; Gottschalk, E.; Gray, L.; Green, D.; Gutsche, O.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kunori, S.; Kwan, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. I.; Maruyama, S.; Mason, D.; McBride, P.; Mishra, K.; Mrenna, S.; Musienko, Y.; Newman-Holmes, C.; O’Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitmore, J.; Wu, W.; Yang, F.; Yun, J. C.; Acosta, D.; Avery, P.; Bourilkov, D.; Chen, M.; Cheng, T.; Das, S.; De Gruttola, M.; Di Giovanni, G. P.; Dobur, D.; Drozdetskiy, A.; Field, R. D.; Fisher, M.; Fu, Y.; Furic, I. K.; Hugon, J.; Kim, B.; Konigsberg, J.; Korytov, A.; Kropivnitskaya, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Remington, R.; Rinkevicius, A.; Skhirtladze, N.; Snowball, M.; Yelton, J.; Zakaria, M.; Gaultney, V.; Hewamanage, S.; Lebolo, L. M.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. 
L.; Adams, T.; Askew, A.; Bochenek, J.; Chen, J.; Diamond, B.; Gleyzer, S. V.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Dorney, B.; Hohlmann, M.; Kalakhety, H.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Bazterra, V. E.; Betts, R. R.; Bucinskaite, I.; Callner, J.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Khalatyan, S.; Kurt, P.; Lacroix, F.; O’Brien, C.; Silkworth, C.; Strom, D.; Turner, P.; Varelas, N.; Akgun, U.; Albayrak, E. A.; Bilki, B.; Clarida, W.; Dilsiz, K.; Duru, F.; Griffiths, S.; Merlo, J. -P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Newsom, C. R.; Ogul, H.; Onel, Y.; Ozok, F.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yetkin, T.; Yi, K.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Giurgiu, G.; Gritsan, A. V.; Hu, G.; Maksimovic, P.; Swartz, M.; Whitbeck, A.; Baringer, P.; Bean, A.; Benelli, G.; Kenny, R. P.; Murray, M.; Noonan, D.; Sanders, S.; Stringer, R.; Wood, J. S.; Barfuss, A. F.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Shrestha, S.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kolberg, T.; Lu, Y.; Marionneau, M.; Mignerey, A. C.; Pedro, K.; Peterman, A.; Skuja, A.; Temple, J.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Bauer, G.; Busza, W.; Butz, E.; Cali, I. A.; Chan, M.; Dutta, V.; Gomez Ceballos, G.; Goncharov, M.; Kim, Y.; Klute, M.; Levin, A.; Luckey, P. D.; Ma, T.; Nahn, S.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Stephans, G. S. F.; Stöckli, F.; Sumorok, K.; Sung, K.; Velicanu, D.; Wolf, R.; Wyslouch, B.; Yang, M.; Yilmaz, Y.; Yoon, A. S.; Zanetti, M.; Zhukova, V.; Dahmes, B.; De Benedetti, A.; Franzoni, G.; Gude, A.; Haupt, J.; Kao, S. 
C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Sasseville, M.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Cremaldi, L. M.; Kroeger, R.; Perera, L.; Rahmat, R.; Sanders, D. A.; Summers, D.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Eads, M.; Keller, J.; Kravchenko, I.; Lazo-Flores, J.; Malik, S.; Snow, G. R.; Godshalk, A.; Iashvili, I.; Jain, S.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Wan, Z.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Haley, J.; Nash, D.; Orimoto, T.; Trocino, D.; Wood, D.; Zhang, J.; Anastassov, A.; Hahn, K. A.; Kubik, A.; Lusito, L.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Velasco, M.; Won, S.; Berry, D.; Brinkerhoff, A.; Chan, K. M.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kolb, J.; Lannon, K.; Luo, W.; Lynch, S.; Marinelli, N.; Morse, D. M.; Pearson, T.; Planer, M.; Ruchti, R.; Slaunwhite, J.; Valls, N.; Wayne, M.; Wolf, M.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Puigh, D.; Rodenburg, M.; Smith, G.; Timcheck, J.; Vuosalo, C.; Williams, G.; Winer, B. L.; Wolfe, H.; Berry, E.; Elmer, P.; Halyo, V.; Hebda, P.; Hegeman, J.; Hunt, A.; Jindal, P.; Koay, S. A.; Lopes Pegna, D.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Raval, A.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zenz, S. C.; Zuranski, A.; Brownson, E.; Lopez, A.; Mendez, H.; Ramirez Vargas, J. E.; Alagoz, E.; Benedetti, D.; Bolla, G.; Bortoletto, D.; De Mattia, M.; Everett, A.; Hu, Z.; Jones, M.; Koybasi, O.; Kress, M.; Leonardo, N.; Maroussov, V.; Merkel, P.; Miller, D. H.; Neumeister, N.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Vidal Marono, M.; Yoo, H. D.; Zablocki, J.; Zheng, Y.; Guragain, S.; Parashar, N.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Padley, B. 
P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Miner, D. C.; Petrillo, G.; Vishnevskiy, D.; Zielinski, M.; Bhatti, A.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Lungu, G.; Malik, S.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Rekovic, V.; Robles, J.; Rose, K.; Salur, S.; Schnetzer, S.; Seitz, C.; Somalwar, S.; Stone, R.; Walker, M.; Cerizza, G.; Hollingsworth, M.; Spanier, S.; Yang, Z. C.; York, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Safonov, A.; Sakuma, T.; Suarez, I.; Tatarinov, A.; Toback, D.; Akchurin, N.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Jeong, C.; Kovitanggoon, K.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Balazs, M.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Lin, C.; Neu, C.; Wood, J.; Gollapinni, S.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sakharov, A.; Anderson, M.; Belknap, D. A.; Borrello, L.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Friis, E.; Grogg, K. S.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Klukas, J.; Lanaro, A.; Lazaridis, C.; Loveless, R.; Mohapatra, A.; Mozer, M. U.; Ojalvo, I.; Pierro, G. A.; Ross, I.; Savin, A.; Smith, W. H.; Swanson, J.

    2013-05-28

    A search for the standard model Higgs boson produced in association with a top-quark pair is presented, using data samples corresponding to an integrated luminosity of 5.0 fb-1 (5.1 fb-1) collected in pp collisions at a center-of-mass energy of 7 TeV (8 TeV). Events are considered in which the top-quark pair decays to either the lepton+jets or the dilepton final state, ℓ being an electron or a muon. The search is optimized for the decay mode . The largest background to the signal is top-quark pair production with additional jets. Artificial neural networks are used to discriminate between signal and background events. Combining the results from the 7 TeV and 8 TeV samples, the observed (expected) limit on the cross section for Higgs boson production in association with top-quark pairs, for a Higgs boson mass of 125 GeV, is 5.8 (5.2) times the standard model expectation.

  7. State–time spectrum of signal transduction logic models

    International Nuclear Information System (INIS)

    MacNamara, Aidan; Terfve, Camille; Henriques, David; Bernabé, Beatriz Peñalver; Saez-Rodriguez, Julio

    2012-01-01

    Despite the current wealth of high-throughput data, our understanding of signal transduction is still incomplete. Mathematical modeling can be a tool to gain insight into such processes. Detailed biochemical modeling provides deep understanding, but does not scale well beyond a relatively small number of proteins. In contrast, logic modeling can be used where the biochemical knowledge of the system is sparse and, because it is parameter-free (or, at most, uses relatively few parameters), it scales well to large networks that can be derived by manual curation or retrieved from public databases. Here, we present an overview of logic modeling formalisms in the context of training logic models to data, and specifically the different approaches to modeling state (from qualitative to quantitative data) and dynamics (time) of signal transduction. We use a toy model of signal transduction to illustrate how different logic formalisms (Boolean, fuzzy logic and differential equations) treat state and time. Different formalisms allow for different features of the data to be captured, at the cost of extra requirements in terms of computational power and data quality and quantity. Through this demonstration, the assumptions behind each formalism are discussed, as well as their advantages, disadvantages and possible future developments. (paper)
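    As a minimal illustration of the Boolean formalism discussed above (binary state, discrete time), here is a hypothetical three-node cascade under synchronous updates; the network (receptor R activates kinase K, which activates transcription factor T) is an invented toy, not the paper's example:

```python
# Synchronous Boolean simulation of a hypothetical toy cascade:
# receptor R activates kinase K, which activates transcription factor T.
def step(state):
    """One synchronous update: every node reads the previous state."""
    r, k, t = state["R"], state["K"], state["T"]
    return {"R": r,   # input node, held constant
            "K": r,   # K becomes active if R was active
            "T": k}   # T becomes active if K was active

def simulate(initial, n_steps):
    """Return the trajectory [state_0, state_1, ..., state_n]."""
    trajectory = [initial]
    for _ in range(n_steps):
        trajectory.append(step(trajectory[-1]))
    return trajectory

traj = simulate({"R": 1, "K": 0, "T": 0}, 3)
```

In this formalism the signal propagates exactly one layer per discrete time step; fuzzy logic would replace the 0/1 states with graded values, and differential equations would make time continuous.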

  8. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  9. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    Science.gov (United States)

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log - T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in x-direction and 19.3 m/s in y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration is 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (∆t = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
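    The summation scheme described in the abstract can be sketched as follows. The parameter values are the experimentally determined averages quoted above; the plan geometry and the spot-switch travel rule (slower axis dominates) are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of the beam delivery time (BDT) model: BDT is the sum of
# layer-switch, spot-switch (magnet settling plus scanning travel), and
# spot-delivery (MU / spill rate) times.
LAYER_SWITCH_S = 1.91        # average layer switch time, s
MAGNET_PREP_S = 0.00193      # magnet preparation and verification time, s
SPEED_X, SPEED_Y = 5.9, 19.3 # average scanning speeds, m/s
SPILL_RATE_MU_S = 8.7        # proton spill rate, MU/s

def beam_delivery_time(layers):
    """layers: list of energy layers, each a list of spots (x_m, y_m, mu)."""
    total = 0.0
    for layer in layers:
        total += LAYER_SWITCH_S
        prev = None
        for x, y, mu in layer:
            if prev is not None:
                dx, dy = abs(x - prev[0]), abs(y - prev[1])
                # spot switch: magnet settling plus travel time
                total += MAGNET_PREP_S + max(dx / SPEED_X, dy / SPEED_Y)
            total += mu / SPILL_RATE_MU_S  # spot delivery
            prev = (x, y)
    return total
```

A single layer with one 8.7 MU spot, for example, costs one layer switch plus one second of spill.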

  10. Establishment of a real-time PCR method for quantification of geosmin-producing Streptomyces spp. in recirculating aquaculture systems.

    Science.gov (United States)

    Auffret, Marc; Pilote, Alexandre; Proulx, Emilie; Proulx, Daniel; Vandenberg, Grant; Villemur, Richard

    2011-12-15

    Geosmin and 2-methylisoborneol (MIB) have been associated with off-flavour problems in fish and seafood products, generating a strong negative impact for aquaculture industries. Although most of the producers of geosmin and MIB have been identified as Streptomyces species or cyanobacteria, Streptomyces spp. are thought to be responsible for the synthesis of these compounds in indoor recirculating aquaculture systems (RAS). The detection of genes involved in the synthesis of geosmin and MIB can be a relevant indicator of the beginning of off-flavour events in RAS. Here, we report a real-time polymerase chain reaction (qPCR) protocol targeting geoA sequences that encode a germacradienol synthase involved in geosmin synthesis. New geoA-related sequences were retrieved from eleven geosmin-producing Actinomycete strains, among them two Streptomyces strains isolated from two RAS. Combined with geoA-related sequences available in gene databases, we designed primers and standards suitable for qPCR assays targeting mainly Streptomyces geoA. Using our qPCR protocol, we succeeded in measuring the level of geoA copies in sand filter and biofilters in two RAS. This study is the first to apply qPCR assays to detect and quantify the geosmin synthesis gene (geoA) in RAS. Quantification of geoA in RAS could permit the monitoring of the level of geosmin producers prior to the occurrence of geosmin production. This information will be most valuable for fish producers to manage further development of off-flavour events. Copyright © 2011 Elsevier Ltd. All rights reserved.
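    Absolute quantification with qPCR, as used here to count geoA copies, rests on a standard curve relating quantification cycle (Cq) to log10 copy number. A minimal sketch with illustrative calibration values (a perfectly efficient assay loses about 3.32 cycles per 10-fold dilution); these are not the study's actual curve parameters:

```python
# Standard-curve quantification: fit Cq = slope * log10(copies) + intercept
# on serial dilutions of a standard, then invert for unknown samples.
def fit_standard_curve(log10_copies, cq_values):
    """Least-squares line through (log10 copies, Cq) calibration points."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve for an unknown sample's Cq."""
    return 10 ** ((cq - intercept) / slope)

# Illustrative dilution series: 10^3..10^6 copies, slope -3.32 (100% efficiency)
slope, intercept = fit_standard_curve([3, 4, 5, 6],
                                      [33.0, 29.68, 26.36, 23.04])
```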

  11. Dynamic Travel Time Prediction Models for Buses Using Only GPS Data

    Directory of Open Access Journals (Sweden)

    Wei Fan

    2015-01-01

    Providing real-time and accurate travel time information of transit vehicles can be very helpful as it assists passengers in planning their trips to minimize waiting times. The purpose of this research is to develop and compare dynamic travel time prediction models which can provide accurate prediction of bus travel time in order to give real-time information at a given downstream bus stop using only global positioning system (GPS) data. Historical Average (HA), Kalman Filtering (KF) and Artificial Neural Network (ANN) models are considered and developed in this paper. A case study was conducted using the three models. Promising results are obtained from the case study, indicating that the models can be used to implement an Advanced Public Transport System. The implementation of this system could assist transit operators in improving the reliability of bus services, thus attracting more travelers to transit vehicles and helping relieve congestion. The performances of the three models were assessed and compared with each other under two criteria: overall prediction accuracy and robustness. It was shown that the ANN outperformed the other two models in both aspects. In conclusion, it is shown that bus travel time information can be reasonably provided using only arrival and departure time information at stops, even in the absence of traffic-stream data.
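    Of the three model families compared, the Kalman filter is the simplest to sketch. A scalar filter tracking one link's travel time from successive bus observations might look like the following; the noise variances and initial state are illustrative assumptions, not the paper's calibrated values:

```python
# Minimal scalar Kalman filter for link travel time: the state is the
# current travel time on a link, modeled as a random walk; each new bus's
# observed time updates the estimate.
def kalman_travel_time(observations, q=4.0, r=25.0, x0=300.0, p0=100.0):
    """Filter observed link travel times (seconds); q = process noise
    variance, r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        p += q              # predict: uncertainty grows between buses
        k = p / (p + r)     # Kalman gain
        x += k * (z - x)    # update toward the new observation
        p *= (1 - k)
        estimates.append(x)
    return estimates

est = kalman_travel_time([310, 320, 330, 340])
```

The estimates trail the rising observations, smoothing measurement noise while adapting to the trend.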

  12. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    Science.gov (United States)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  13. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response

  14. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results....... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained...... during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration...

  15. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  16. Model Checking Timed Automata with Priorities using DBM Subtraction

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Pettersson, Paul

    2006-01-01

    In this paper we describe an extension of timed automata with priorities, and efficient algorithms to compute subtraction on DBMs (difference bounded matrices), needed in symbolic model-checking of timed automata with priorities. The subtraction is one of the few operations on DBMs that result...... in a non-convex set needing sets of DBMs for representation. Our subtraction algorithms are efficient in the sense that the number of generated DBMs is significantly reduced compared to a naive algorithm. The overhead in time is compensated by the gain from reducing the number of resulting DBMs since...... this number affects the performance of symbolic model-checking. The uses of the DBM subtraction operation extend beyond timed automata with priorities. It is also useful for allowing guards on transitions with urgent actions, deadlock checking, and timed games....

  17. Rapid Prototyping — A Tool for Presenting 3-Dimensional Digital Models Produced by Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Juho-Pekka Virtanen

    2014-07-01

    Rapid prototyping has received considerable interest with the introduction of affordable rapid prototyping machines. These machines can be used to manufacture physical models from three-dimensional digital mesh models. In this paper, we compare the results obtained with a new, affordable, rapid prototyping machine and a traditional professional machine. Two separate data sets are used for this, both of which were acquired using terrestrial laser scanning. Both of the machines were able to produce complex and highly detailed geometries in plastic material from models based on terrestrial laser scanning. The dimensional accuracies and detail levels of the machines were comparable, and the physical artifacts caused by the fused deposition modeling (FDM) technique used in the rapid prototyping machines could be found in both models. The accuracy of terrestrial laser scanning exceeded the requirements for manufacturing physical models of large statues and building segments at a 1:40 scale.

  18. Modeling of wear behavior of Al/B{sub 4}C composites produced by powder metallurgy

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Ismail; Bektas, Asli [Gazi Univ., Ankara (Turkey). Dept. of Industrial Design Engineering; Guel, Ferhat; Cinci, Hanifi [Gazi Univ., Ankara (Turkey). Dept. of Materials and Metallurgy Engineering

    2017-06-01

    Wear characteristics of composites with an Al matrix reinforced with B{sub 4}C particles at percentages of 5, 10, 15 and 20, produced by the powder metallurgy method, were studied. For this purpose, mixtures of Al and B{sub 4}C powders were pressed under 650 MPa and then sintered at 635 °C. Analyses of hardness, density and microstructure were performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads with 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feed-forward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.

  19. Modeling the time-changing dependence in stock markets

    International Nuclear Information System (INIS)

    Frezza, Massimiliano

    2012-01-01

    The time-changing dependence in stock markets is investigated by assuming the multifractional process with random exponent (MPRE) as a model for actual log price dynamics. By modeling its functional parameter S(t, ω) via the square root process (S.R.) a twofold aim is achieved. On the one hand, both the main financial and statistical properties shown by the estimated S(t) are captured by surrogates; on the other hand, this capability proves able to model the time-changing dependence shown by stocks or indexes. In particular, a new dynamical approach to interpreting market mechanisms is given. Empirical evidence is offered by analysing the behaviour of the daily closing prices of a well-known index, the Dow Jones Industrial Average (DJIA), beginning in March 1990 and ending in February 2005.

  20. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = (k^(1/2)y)/ν instead of y^+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
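    The bounded time scale described above can be sketched as follows. C_mu is the standard k-epsilon constant; the damping function is omitted (set to 1) for brevity, so this is an illustration of the bounding idea rather than the full model:

```python
import math

# The dynamic time scale k/epsilon is floored by the Kolmogorov time scale
# sqrt(nu/epsilon), so the eddy viscosity and the reformulated dissipation
# equation stay non-singular as k -> 0 at the wall.
C_MU = 0.09  # standard k-epsilon constant

def turbulent_time_scale(k, eps, nu):
    """max of the dynamic scale k/eps and the Kolmogorov scale sqrt(nu/eps)."""
    return max(k / eps, math.sqrt(nu / eps))

def eddy_viscosity(k, eps, nu):
    """nu_t = C_mu * k * T, with the damping function set to 1 here."""
    return C_MU * k * turbulent_time_scale(k, eps, nu)
```

Far from the wall k/eps dominates and the standard model is recovered; at the wall, where k vanishes, the Kolmogorov floor prevents the singularity.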

  1. Stability and the structure of continuous-time economic models

    NARCIS (Netherlands)

    Nieuwenhuis, H.J.; Schoonbeek, L.

    In this paper we investigate the relationship between the stability of macroeconomic, or macroeconometric, continuous-time models and the structure of the matrices appearing in these models. In particular, we concentrate on dominant-diagonal structures. We derive general stability results for models

  2. Using a thermal-based two source energy balance model with time-differencing to estimate surface energy fluxes with day-night MODIS observations

    DEFF Research Database (Denmark)

    Guzinski, Radoslaw; Anderson, M.C.; Kustas, W.P.

    2013-01-01

    The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature...... agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use....

  3. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    Science.gov (United States)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
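    The frequency ratio computation at the core of the empirical model can be sketched as follows, with a hypothetical grid and a single site attribute; the full model would repeat this for all seven attributes and sum the FRs per cell:

```python
# Frequency ratio (FR) for one site attribute: for each class,
# FR = (fraction of landslide cells in the class) /
#      (fraction of all cells in the class).
# A cell's susceptibility index is the sum of its classes' FRs
# across attributes.
def frequency_ratios(class_of_cell, landslide_cells):
    """class_of_cell: dict cell_id -> class label;
    landslide_cells: list of cell ids with observed landslides."""
    total = len(class_of_cell)
    slides = len(landslide_cells)
    fr = {}
    for c in set(class_of_cell.values()):
        area_frac = sum(1 for v in class_of_cell.values() if v == c) / total
        slide_frac = sum(1 for cell in landslide_cells
                         if class_of_cell[cell] == c) / slides
        fr[c] = slide_frac / area_frac
    return fr
```

An FR above 1 means landslides are over-represented in that class relative to its share of the landscape.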

  4. Modelling and analysis of real-time and hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Olivero, A

    1994-09-29

    This work deals with the modelling and analysis of real-time and hybrid systems. We first present timed-graphs as a model for real-time systems and we recall the basic notions of the analysis of real-time systems. We describe temporal properties of the timed-graphs using TCTL formulas. We consider two methods for property verification: on the one hand we study symbolic model-checking (based on backward analysis), and on the other hand we propose a verification method derived from the construction of the simulation graph (based on forward analysis). Both methods have been implemented within the KRONOS verification tool. Their application to the automatic verification of several real-time systems confirms the practical interest of our approach. In a second part we study hybrid systems, systems combining discrete components with continuous ones. As the analysis of this kind of system is not decidable in the general case, we identify two sub-classes of hybrid systems and we give a construction-based method for the generation of a timed-graph from an element of the sub-classes. We prove that in one case the timed-graph obtained is bisimilar with the considered system and that there exists a simulation in the other case. These relationships allow the application of the described techniques to hybrid systems in the defined sub-classes. (authors). 60 refs., 43 figs., 8 tabs., 2 annexes.

  5. Using forecast modelling to evaluate treatment effects in single-group interrupted time series analysis.

    Science.gov (United States)

    Linden, Ariel

    2018-05-11

    Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied serially over time and the intervention is expected to "interrupt" the level and/or trend of that outcome. ITSA is commonly evaluated using methods which may produce biased results if model assumptions are violated. In this paper, treatment effects are alternatively assessed by using forecasting methods to closely fit the preintervention observations and then forecast the post-intervention trend. A treatment effect may be inferred if the actual post-intervention observations diverge from the forecasts by some specified amount. The forecasting approach is demonstrated using the effect of California's Proposition 99 for reducing cigarette sales. Three forecast models are fit to the preintervention series: linear regression (REG), Holt-Winters (HW) non-seasonal smoothing, and autoregressive integrated moving average (ARIMA); forecasts are generated into the post-intervention period. The actual observations are then compared with the forecasts to assess intervention effects. The preintervention data were fit best by HW, followed closely by ARIMA. REG fit the data poorly. The actual post-intervention observations were above the forecasts in HW and ARIMA, suggesting no intervention effect, but below the forecasts in REG (suggesting a treatment effect), thereby raising doubts about any definitive conclusion of a treatment effect. In a single-group ITSA, treatment effects are likely to be biased if the model is misspecified. Therefore, evaluators should consider using forecast models to accurately fit the preintervention data and generate plausible counterfactual forecasts, thereby improving causal inference of treatment effects in single-group ITSA studies. © 2018 John Wiley & Sons, Ltd.
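    The forecasting approach can be sketched with Holt's linear method (the non-seasonal Holt-Winters variant named above), implemented directly here to keep the example self-contained; the smoothing weights are illustrative, not fitted:

```python
# Holt's linear exponential smoothing: fit level and trend to the
# preintervention series, then project the counterfactual forecast
# h steps into the post-intervention period.
def holt_forecast(y, horizon, alpha=0.8, beta=0.2):
    """y: preintervention observations; returns `horizon` forecasts."""
    level, trend = y[0], y[1] - y[0]       # simple initialization
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]
```

Divergence of the observed post-intervention values from these counterfactual forecasts is then the evidence for (or against) a treatment effect.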

  6. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    Science.gov (United States)

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.

  7. The burning fuse model of unbecoming in time

    Science.gov (United States)

    Norton, John D.

    2015-11-01

    In the burning fuse model of unbecoming in time, the future is real and the past is unreal. It is used to motivate the idea that there is something unbecoming in the present literature on the metaphysics of time: its focus is merely the assigning of a label "real."

  8. Spectrophotometric analysis of tomato plants produced from seeds exposed under space flight conditions for a long time

    Science.gov (United States)

    Nechitailo, Galina S.; Yurov, S.; Cojocaru, A.; Revin, A.

    The analysis of the lycopene and other carotenoids in tomatoes produced from seeds exposed under space flight conditions at the orbital station MIR for six years is presented in this work. Our previous experiments with tomato plants showed the germination of seeds to be 32%. Genetic investigations revealed 18% in the experiment and 8% in the control. Experiments were conducted to study the capacity of various stimulating factors to increase germination of seeds exposed for a long time to the action of space flight factors. An increase of 20% was achieved, but at the same time mutants having no analogues in the control variants were detected. For the present investigations of the third generation of plants produced from seeds stored for a long time under space flight conditions, 80 tomatoes from forty plants were selected. The concentration of lycopene in the experimental specimens was 2.5-3 times higher than in the control variants. The spectrophotometric analysis of ripe tomatoes revealed typical three-peaked carotenoid spectra with a high maximum of lycopene (a medium maximum at 474 nm), a moderate maximum of its precursor, phytoene (a medium maximum at 267 nm), and a low maximum of carotenes. In green tomatoes, on the contrary, a high maximum of phytoene, a moderate maximum of lycopene and a low maximum of carotenes were observed. The results of the spectral analysis point to the retardation of the biosynthesis of carotenes while the production of lycopene is increased, and to the synthesis of lycopene from phytoene. The electric conduction of tomato juice in the experimental samples is increased, suggesting higher amounts of carotenoids, including lycopene, and electrolytes. The higher the value of electric conduction of a specimen, the higher the spectral maxima of lycopene. The hydrogen ion exponent (pH) of the juice of ripe tomatoes increases, due to which the efficiency of ATP biosynthesis in cell mitochondria is likely to increase, too.
The results demonstrating an increase in the content

  9. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance than the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
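    The decomposition step mentioned above can be sketched as follows: a monthly incidence series is split into a least-squares linear long-term trend and an additive seasonal component. This is a simplified stand-in for the decomposition methods used in the paper, and the data below are synthetic:

```python
# Decompose a monthly series into a linear trend (least squares) and an
# additive seasonal component (mean monthly deviation from the trend).
def decompose_monthly(y):
    """y: monthly observations; returns (trend, 12 seasonal indices)."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    trend = [intercept + slope * t for t in range(n)]
    resid = [v - tr for v, tr in zip(y, trend)]
    # average the detrended residuals month by month
    seasonal = [sum(resid[m::12]) / len(resid[m::12]) for m in range(12)]
    return trend, seasonal
```

The `(0,1,1)` seasonal part of the ARIMA model fitted in the paper handles this same structure through seasonal differencing instead.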

  10. Interactive, open source, travel time scenario modelling: tools to facilitate participation in health service access analysis.

    Science.gov (United States)

    Fisher, Rohan; Lassa, Jonatan

    2017-04-18

    Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and

  11. Growth model of the pineapple guava fruit as a function of thermal time and altitude

    Directory of Open Access Journals (Sweden)

    Alfonso Parra-Coronado

    2016-09-01

    Full Text Available The growth of the pineapple guava fruit is primarily stimulated by temperature but is also influenced by other climatic factors, such as altitude. The goal of this study was to develop a growth model for the pineapple guava fruit as a function of thermal time (GDD, growing-degree days and altitude (H of the production area. Twenty trees per farm were marked in two sites in the Cundinamarca department (Colombia during the 2012 and 2014 seasons. The measurements were performed every seven days, from days 96 and 99 post-anthesis until harvest, in the sites of Tenjo (2,580 m.a.s.l. and San Francisco de Sales (1,800 m.a.s.l., respectively. A growth model was produced for weight as a function of fruit length and diameter, as well as for the weight of the fruit as a function of GDD and H, with this last measure adjusted to a sigmoidal logistic growth model. The parameters of the regression analysis showed that the models satisfactorily predicted fruit growth for both sites, with a high coefficient of determination. The cross-validation showed good statistical fit between the predicted and observed values; the intercept was not significantly different from zero, and the slope was statistically equal to one.
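    A sigmoidal logistic growth curve in thermal time, of the general form fitted in the study, can be sketched as follows; the parameter values and the simple linear altitude adjustment are illustrative assumptions, not the paper's fitted coefficients:

```python
import math

# Illustrative parameters only (not the study's fitted values):
# w_max = asymptotic fruit weight (g), k = growth rate in 1/GDD,
# gdd50 = thermal time at the inflection point. The sign and size of
# the altitude adjustment are assumptions for demonstration.
def fruit_weight(gdd, altitude_m, w_max=90.0, k=0.01, gdd50=800.0,
                 alt_ref=1800.0, alt_coeff=0.00005):
    """Sigmoidal logistic growth in thermal time (GDD), with a
    hypothetical linear altitude correction of the asymptote."""
    w_adj = w_max * (1.0 + alt_coeff * (altitude_m - alt_ref))
    return w_adj / (1.0 + math.exp(-k * (gdd - gdd50)))
```

    At the inflection point (GDD = gdd50) the curve passes through half the asymptotic weight, which is the property that makes the logistic form convenient for fitting.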

  12. Simulations of radiocarbon in a coarse-resolution world ocean model 2. Distributions of bomb-produced Carbon 14

    International Nuclear Information System (INIS)

    Toggweiler, J.R.; Dixon, K.; Bryan, K.

    1989-01-01

    Part 1 of this study examined the ability of the Geophysical Fluid Dynamics Laboratory (GFDL) primitive equation ocean general circulation model to simulate the steady-state distribution of naturally produced 14C in the ocean prior to the nuclear bomb tests of the 1950s and early 1960s. In part 2, we begin with the steady-state distributions of part 1 and subject the model to the pulse of elevated atmospheric 14C concentrations observed since the 1950s.

  13. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

    The modeling of the bidding behaviour of producers is a key point in the modeling and simulation of competitive electricity markets. In our paper, the linear supply function model is applied to find the Supply Function Equilibrium (SFE) analytically. We also propose a new and efficient approach to finding SFEs in network-constrained electricity markets by searching for the best slope of the supply function while varying the intercept; the method can be applied to large systems. The proposed approach is applied to the IEEE 118-bus test system, and a comparison between bidding the slope and bidding the intercept is presented with reference to the test system. (author)
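    As background for the supply-function setting, a uniform-price market clearing with linear supply bids p = a_i + b_i·q_i reduces to one linear equation; the coefficients and demand below are illustrative, and this shows the clearing step only, not the equilibrium search:

```python
# Each producer bids a linear supply function p = a_i + b_i * q_i.
# For a fixed demand D, the uniform clearing price solves
# sum_i (p - a_i) / b_i = D  =>  p = (D + sum a_i/b_i) / sum 1/b_i.
bids = [(10.0, 0.5), (12.0, 0.4)]   # (intercept a_i, slope b_i), invented
demand = 50.0                        # MW, invented

inv = sum(1.0 / b for _, b in bids)
price = (demand + sum(a / b for a, b in bids)) / inv
dispatch = [(price - a) / b for a, b in bids]   # each producer's output
```

    The equilibrium search layered on top of this would adjust the bid coefficients strategically; the clearing itself stays a one-line linear solve.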

  14. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation is essentially an optimization of the S.T.I. of the most important periodically tested components of the system. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation to compute system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.), and the extra computing cost is negligible. A.C.I. was added to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es

  15. Research on Coordination of Fresh Produce Supply Chain in Big Market Sales Environment

    Directory of Open Access Journals (Sweden)

    Juning Su

    2014-01-01

    Full Text Available In this paper, we propose two decision models for decentralized and centralized fresh produce supply chains with stochastic supply and demand and controllable transportation time. The optimal order quantity and the optimal transportation time in these two supply chain systems are derived. To improve profits in a decentralized supply chain, based on an analysis of the risk taken by each participant in the supply chain, we design a set of contracts which can coordinate this type of fresh produce supply chain with stochastic supply, stochastic demand, and controllable transportation time. We also obtain a value range of the contract parameters that can increase the profits of all participants in the decentralized supply chain. The expected profits of the decentralized setting and the centralized setting are compared with respect to given numerical examples. Furthermore, sensitivity analyses of the deterioration rate factor and the freshness factor are performed. The results of the numerical examples show that for fresh produce that is more perishable and whose quality decays more quickly, the transportation time is shorter, the order quantity is smaller, the total profit of the whole supply chain is lower, and the possibility of cooperation between supplier and retailer is higher.
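    A stripped-down, newsvendor-style version of the ordering problem can illustrate the reported direction of the effects; the price, cost, demand distribution, and freshness decay rate below are all assumed for illustration and are not the paper's model:

```python
import math

# Newsvendor-style sketch: choose order quantity Q to maximize expected
# profit when a freshness factor decays exponentially with the
# transportation time t. All numbers are invented.
def expected_profit(q, t, price=10.0, cost=4.0, demand_mean=100.0):
    theta = math.exp(-0.05 * t)        # assumed freshness survival factor
    usable = q * theta                 # stock surviving transport
    # Exponential demand for tractability: E[min(D, s)] = m(1 - e^{-s/m})
    exp_sales = demand_mean * (1.0 - math.exp(-usable / demand_mean))
    return price * exp_sales - cost * q

# Brute-force search over integer order quantities for t = 2 time units.
best_q = max(range(0, 301), key=lambda q: expected_profit(q, t=2.0))
```

    Re-running the search with a larger t (faster freshness loss in transit) yields a smaller optimal order, matching the qualitative finding quoted above.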

  16. Testing the time-of-flight model for flagellar length sensing.

    Science.gov (United States)

    Ishikawa, Hiroaki; Marshall, Wallace F

    2017-11-07

    Cilia and flagella are microtubule-based organelles that protrude from the surface of most cells, are important to the sensing of extracellular signals, and generate a driving force for fluid flow. Maintenance of flagellar length requires an active transport process known as intraflagellar transport (IFT). Recent studies reveal that the amount of IFT injection negatively correlates with the length of flagella. These observations suggest that a length-dependent feedback regulates IFT. However, it is unknown how cells recognize the length of flagella and control IFT. Several theoretical models try to explain this feedback system. We focused on one of the models, the "time-of-flight" model, which measures the length of flagella on the basis of the travel time of IFT proteins in the flagellar compartment. We tested the time-of-flight model using Chlamydomonas dynein mutant cells, which show a slower retrograde transport speed. The amount of IFT injection in dynein mutant cells was higher than that in control cells. This observation does not support the prediction of the time-of-flight model and suggests that Chlamydomonas uses another length-control feedback system rather than that described by the time-of-flight model. © 2017 Ishikawa and Marshall. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
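    The model's core prediction can be checked with simple arithmetic: if length were sensed via IFT round-trip time, slowing retrograde transport should inflate the apparent length and therefore reduce injection, the opposite of what was observed. The speeds below are assumed, typical-scale values, not measurements from the paper:

```python
# Back-of-envelope check of the time-of-flight idea: the length readout
# would be the IFT round-trip time, anterograde leg plus retrograde leg.
# Speeds (um/s) are illustrative assumptions only.
def round_trip_time(length_um, v_antero=2.0, v_retro=3.0):
    return length_um / v_antero + length_um / v_retro

wild_type = round_trip_time(12.0)                    # normal speeds
dynein_mutant = round_trip_time(12.0, v_retro=1.0)   # slower retrograde
```

    For the same physical length, the mutant's round-trip time is longer, so a time-of-flight sensor would report a longer flagellum and the cell should inject less IFT, not more.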

  17. A four-stage hybrid model for hydrological time series forecasting.

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  18. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
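    A grossly simplified stand-in for the four-stage pipeline (moving averages in place of EMD/EEMD, linear extrapolation in place of the RBFNN and LNN) shows the denoise, decompose, predict, and ensemble flow; the streamflow numbers are invented:

```python
# Toy illustration only: each stage of the real hybrid model is replaced
# with the simplest possible operation that plays the same role.
def moving_average(x, w=3):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def forecast_next(series):
    denoised = moving_average(series)            # stage 1: denoise
    trend = moving_average(denoised, w=5)        # stage 2: decompose
    resid = [d - t for d, t in zip(denoised, trend)]
    # stage 3: predict each component (naive linear extrapolation)
    trend_next = trend[-1] + (trend[-1] - trend[-2])
    resid_next = resid[-1]
    return trend_next + resid_next               # stage 4: ensemble

flow = [3.0, 3.2, 3.1, 3.5, 3.6, 3.8, 3.7, 4.0]  # invented series
prediction = forecast_next(flow)
```

    The design point carried over from the paper is that forecasting the smooth components separately and recombining them is easier than forecasting the raw noisy series directly.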

  19. Rapid and accurate identification by real-time PCR of biotoxin-producing dinoflagellates from the family gymnodiniaceae.

    Science.gov (United States)

    Smith, Kirsty F; de Salas, Miguel; Adamson, Janet; Rhodes, Lesley L

    2014-03-07

    The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.
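    A standard-curve back-calculation of the kind used to quantify cells from environmental samples can be sketched as follows; the Cq values and dilution series are invented for illustration:

```python
# Standard-curve quantification sketch: Cq values from a dilution series
# give the slope m and intercept b of Cq = m*log10(N0) + b; an unknown
# sample's Cq is then inverted to a cell estimate. Data are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

log10_cells = [0, 1, 2, 3, 4]          # 1 .. 10,000 cells per reaction
cq = [35.1, 31.8, 28.4, 25.1, 21.7]    # hypothetical Cq values
m, b = fit_line(log10_cells, cq)

def cells_from_cq(c):
    return 10 ** ((c - b) / m)

estimate = cells_from_cq(33.0)         # unknown sample's Cq
```

    A slope near -3.32 per decade corresponds to roughly 100% amplification efficiency, which is one routine check on an assay like those described.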

  20. Detection and modelling of time-dependent QTL in animal populations

    DEFF Research Database (Denmark)

    Lund, Mogens S; Sørensen, Peter; Madsen, Per

    2008-01-01

    A longitudinal approach is proposed to map QTL affecting function-valued traits and to estimate their effect over time. The method is based on fitting mixed random regression models. The QTL allelic effects are modelled with random coefficient parametric curves and using a gametic relationship matrix. A simulation study was conducted in order to assess the ability of the approach to fit different patterns of QTL over time. It was found that this longitudinal approach was able to adequately fit the simulated variance functions and considerably improved the power of detection of time-varying QTL effects compared to the traditional univariate model. This was confirmed by an analysis of protein yield data in dairy cattle, where the model was able to detect QTL with a high effect either at the beginning or the end of the lactation that were not detected with a simple 305-day model.

  1. Time-dependent inhomogeneous jet models for BL Lac objects

    Science.gov (United States)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  2. Finite difference time domain solution of electromagnetic scattering on the hypercube

    International Nuclear Information System (INIS)

    Calalo, R.H.; Lyons, J.R.; Imbriale, W.A.

    1988-01-01

    Electromagnetic fields interacting with a dielectric or conducting structure produce scattered electromagnetic fields. To model the fields produced by complicated, volumetric structures, the finite difference time domain (FDTD) method employs an iterative solution to Maxwell's time-dependent curl equations. Implementations of the FDTD method use memory intensively and perform numerous calculations per time step iteration. The authors have implemented an FDTD code on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. This code can solve problems requiring as many as 2,048,000 unit cells on a 32-node Hypercube. For smaller problems, the code produces solutions in a fraction of the time required to solve the same problems on sequential computers.
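    The FDTD update stencil itself is compact; below is a minimal 1-D Yee leapfrog sketch in normalized units with a Courant number of 0.5, as an illustration of the method, not of the Hypercube implementation:

```python
import math

# Minimal 1-D FDTD (Yee) sketch: leapfrog update of E and H from
# Maxwell's curl equations, with a Gaussian hard source in the middle.
# Units are normalized so the update coefficient is the Courant number.
nz, nt = 200, 150
ez = [0.0] * nz
hy = [0.0] * nz
for t in range(nt):
    for k in range(1, nz):                    # update E from curl of H
        ez[k] += 0.5 * (hy[k - 1] - hy[k])
    ez[nz // 2] = math.exp(-((t - 30) ** 2) / 100.0)  # Gaussian source
    for k in range(nz - 1):                   # update H from curl of E
        hy[k] += 0.5 * (ez[k] - ez[k + 1])
```

    Parallelizing this on a hypercube amounts to splitting the grid among nodes and exchanging the boundary field values each time step, which is why the method maps well to such machines.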

  3. Applying MDA to SDR for Space to Model Real-time Issues

    Science.gov (United States)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.

  4. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or z-domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is merely a convenient but ambiguous visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
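    The pole-to-frequency correspondence described here can be demonstrated with a small Yule-Walker fit of an AR(2) model to a sampled sinusoid; the signal and its tiny perturbation are synthetic:

```python
import cmath
import math

# Fit AR(2) by Yule-Walker and read the spectral peak frequency off the
# pole angle, illustrating the Prony's-relation view that AR poles
# correspond to complex harmonic components. Signal is synthetic.
N = 400
x = [math.sin(2 * math.pi * 0.1 * n) + 0.01 * math.cos(7.0 * n)
     for n in range(N)]  # sinusoid at 0.1 cycles/sample + small ripple

def autocov(x, lag):
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
# Solve the 2x2 Yule-Walker system [r0 r1; r1 r0][a1 a2]' = [r1 r2]'
det = r0 * r0 - r1 * r1
a1 = (r0 * r1 - r1 * r2) / det
a2 = (r0 * r2 - r1 * r1) / det
# Poles of 1/(1 - a1 z^-1 - a2 z^-2) are the roots of z^2 - a1 z - a2.
disc = cmath.sqrt(a1 * a1 + 4 * a2)
pole = (a1 + disc) / 2
peak_freq = abs(cmath.phase(pole)) / (2 * math.pi)  # cycles per sample
```

    For a nearly pure sinusoid the pole sits on the unit circle at the signal frequency; the pole angle, not the spectral peak height, carries the frequency information, consistent with the assertion above.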

  5. Modelling of Patterns in Space and Time

    CERN Document Server

    Murray, James

    1984-01-01

    This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...

  6. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.

    2017-09-01

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.

  7. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
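    On the classical side of the comparison, the Kaplan-Meier estimator reduces to a short product-limit computation; the toy follow-up times and censoring flags below are invented:

```python
# Minimal Kaplan-Meier (product-limit) estimator.
# events: 1 = event observed at that time, 0 = censored. Data invented.
times = [2, 3, 3, 5, 7, 8, 11, 11]
events = [1, 1, 0, 1, 0, 1, 1, 0]

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    surv, curve = 1.0, {}
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # events at time t
        n_t = sum(1 for tt, _ in data if tt >= t)  # at risk at time t
        if d:
            surv *= (1 - d / n_t)
            curve[t] = surv                         # S(t) after the drop
        i += sum(1 for tt, _ in data if tt == t)    # skip tied times
    return curve

km = kaplan_meier(times, events)
```

    Censored subjects leave the risk set without triggering a drop in the curve, which is exactly the incomplete-data handling the abstract contrasts with the Bayesian network approach.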

  8. Dynamics model for real time diagnostics of Triga RC-1 system

    International Nuclear Information System (INIS)

    Gadomski, A.M.; Nanni, V.; Meo, G.

    1988-01-01

    This paper presents a dynamics model of the TRIGA RC-1 reactor system. The model is dedicated to real-time early fault detection during reactor operation in a one-week exploitation cycle. The algorithms are specially suited for real-time, long-time and also accelerated simulations with an assumed diagnostics-oriented accuracy. The approximations, modular structure, numerical methods and validation are discussed. The elaborated model will be built into the TRIGA Supervisor System and the TRIGA Diagnostic Simulator.

  9. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    Science.gov (United States)

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed performance superior to that of the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
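    The selected ARIMA(0,0,1)×(0,1,1) model includes one seasonal difference; the sketch below shows, on a synthetic monthly series, how differencing at lag 12 removes a repeating annual pattern and leaves only the trend increment. The series is made up:

```python
import math

# Synthetic monthly "incidence" with a linear trend plus an annual
# sinusoidal seasonal pattern (both invented for illustration).
months = list(range(48))
incidence = [5 + 0.05 * m + 2 * math.sin(2 * math.pi * m / 12)
             for m in months]

# One seasonal difference at lag 12, as in the (0,1,1) seasonal part:
# the sinusoid cancels exactly, leaving the 12-month trend increment.
seasonal_diff = [incidence[m] - incidence[m - 12] for m in range(12, 48)]
```

    On real surveillance data the differenced series is not constant, and the MA terms of the model are then fitted to what remains.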

  10. A new mouse model for renal lesions produced by intravenous injection of diphtheria toxin A-chain expression plasmid

    Directory of Open Access Journals (Sweden)

    Nakamura Shingo

    2004-04-01

    Full Text Available Abstract Background Various animal models of renal failure have been produced and used to investigate mechanisms underlying renal disease and develop therapeutic drugs. Most methods available to produce such models appear to involve subtotal nephrectomy or intravenous administration of antibodies raised against basement membrane of glomeruli. In this study, we developed a novel method to produce mouse models of renal failure by intravenous injection of a plasmid carrying a toxic gene such as diphtheria toxin A-chain (DT-A gene. DT-A is known to kill cells by inhibiting protein synthesis. Methods An expression plasmid carrying the cytomegalovirus enhancer/chicken β-actin promoter linked to a DT-A gene was mixed with lipid (FuGENE™6 and the resulting complexes were intravenously injected into adult male B6C3F1 mice every day for up to 6 days. After final injection, the kidneys of these mice were sampled on day 4 and weeks 3 and 5. Results H-E staining of the kidney specimens sampled on day 4 revealed remarkable alterations in glomerular compartments, as exemplified by mesangial cell proliferation and formation of extensive deposits in glomerular basement membrane. At weeks 3 and 5, gradual recovery of these tissues was observed. These mice exhibited proteinuria and disease resembling sub-acute glomerulonephritis. Conclusions Repeated intravenous injections of DT-A expression plasmid DNA/lipid complex caused temporary abnormalities mainly in glomeruli of mouse kidney. The disease in these mice resembles sub-acute glomerulonephritis. These DT-A gene-incorporated mice will be useful as animal models in the fields of nephrology and regenerative medicine.

  11. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
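    The two SIR-motivated modifications mentioned, sums of past cases as a susceptible-pool proxy and the logarithm of lagged counts to control contagion-driven autocorrelation, amount to simple covariate construction; the weekly counts and immunity window below are invented:

```python
import math

# Toy construction of the two proposed covariates for a time series
# regression of weekly case counts. Data and window are invented.
cases = [3, 5, 8, 13, 21, 18, 12, 9, 7, 6]
window = 4  # assumed immunity window, in weeks

# Proxy for the immune (non-susceptible) population: recent case burden.
immune_proxy = [sum(cases[max(0, t - window):t]) for t in range(len(cases))]

# Log of lagged counts (+1 to handle zeros) as an autocorrelation term.
log_lag1 = [math.log(cases[t - 1] + 1) if t > 0 else 0.0
            for t in range(len(cases))]
```

    These columns would then enter a quasi-Poisson or negative binomial regression alongside the lagged weather exposures.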

  12. Constitutive modeling for uniaxial time-dependent ratcheting of SS304 stainless steel

    International Nuclear Information System (INIS)

    Kan Qianhua; Kang Guozheng; Zhang Juan

    2007-01-01

    Based on the experimental results for the uniaxial time-dependent ratcheting behavior of SS304 stainless steel at room temperature and 973 K, a new time-dependent constitutive model is proposed. The model describes the time-dependent ratcheting by adding a static/thermal recovery term to the Abdel-Karim-Ohno non-linear kinematic hardening rule. The capability of the model to describe the time-dependent ratcheting was assessed by comparing the simulations with the corresponding experimental results. It is shown that the revised unified viscoplastic model simulates the time-dependent ratcheting reasonably well at both room and high temperatures. (authors)

  13. Modeling of a Reaction-Distillation-Recycle System to Produce Dimethyl Ether through Methanol Dehydration

    Science.gov (United States)

    Muharam, Y.; Zulkarnain, L. M.; Wirya, A. S.

    2018-03-01

    The increase in the dimethyl ether yield through methanol dehydration due to a recycle integration to a reaction-distillation system was studied in this research. A one-dimensional phenomenological model of a methanol dehydration reactor and a shortcut model of distillation columns were used to achieve the aim. Simulation results show that 10.7 moles/s of dimethyl ether is produced in a reaction-distillation system with the reactor length being 4 m, the reactor inlet pressure being 18 atm, the reactor inlet temperature being 533 K, the reactor inlet velocity being 0.408 m/s, and the distillation pressure being 8 atm. The methanol conversion is 90% and the dimethyl ether yield is 48%. The integration of the recycle stream to the system increases the dimethyl ether yield by 8%.
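    Since the dehydration reaction is 2 CH3OH → CH3OCH3 + H2O, per-pass DME production follows directly from feed, conversion, and selectivity; the feed rate and selectivity below are assumed for illustration and are not taken from the paper's model:

```python
# Stoichiometric sketch of methanol dehydration: two moles of methanol
# yield one mole of DME, so the molar production rate is
# feed * conversion * selectivity / 2. All inputs here are assumptions.
methanol_feed = 24.0   # mol/s, assumed feed rate
conversion = 0.90      # fraction of methanol reacted (as reported)
selectivity = 0.99     # assumed fraction of reacted methanol giving DME

dme_rate = methanol_feed * conversion * selectivity / 2.0  # mol/s
```

    Note the factor of two from the stoichiometry: with 90% conversion, the molar DME yield per mole of methanol fed cannot exceed 45%, so reported yield figures depend on how the yield basis is defined.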

  14. Monitoring Murder Crime in Namibia Using Bayesian Space-Time Models

    Directory of Open Access Journals (Sweden)

    Isak Neema

    2012-01-01

    This paper focuses on the analysis of murder in Namibia using a Bayesian spatial smoothing approach with temporal trends. The analysis was based on cases reported in the 13 regions of Namibia for the period 2002–2006, complemented with regional population sizes. The evaluated random effects include space-time structured heterogeneity measuring the effect of regional clustering, unstructured heterogeneity, time, space-time interaction, and population density. The model consists of carefully chosen prior and hyper-prior distributions for parameters and hyper-parameters, with inference conducted using a Gibbs sampling algorithm and a sensitivity test for model validation. The posterior mean estimates of the parameters from the model, selected using DIC as the model selection criterion, show that most of the variation in the relative risk of murder is due to regional clustering, while the effects of population density and time were insignificant. The sensitivity analysis indicates that both the intrinsic and Laplace CAR priors can be adopted as prior distributions for the space-time heterogeneity. In addition, the relative risk map shows a risk structure with an increasing north-south gradient, pointing to low risk in the northern regions of Namibia, while the Karas and Khomas regions experience a long-term increase in murder risk.
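    A minimal sketch of the Gibbs-sampling idea behind such relative-risk smoothing, using a conjugate Poisson-Gamma model for region-level risks. The counts and expected counts below are illustrative, not the Namibian data, and the sketch omits the CAR spatial structure and temporal terms used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed counts y and expected counts E for 13 regions
y = np.array([12, 30, 8, 15, 22, 5, 9, 40, 18, 7, 11, 25, 14])
E = np.array([10., 20., 9., 14., 18., 6., 10., 25., 17., 8., 12., 15., 13.])

a = 1.0              # shape of the Gamma prior on each relative risk theta_i
b = 1.0              # rate hyperparameter, given a Gamma(1, 1) hyperprior
n_iter, burn = 2000, 500
samples = np.zeros((n_iter, len(y)))

for it in range(n_iter):
    # theta_i | y, b ~ Gamma(a + y_i, rate = b + E_i)  (Poisson-Gamma conjugacy)
    theta = rng.gamma(a + y, 1.0 / (b + E))
    # b | theta ~ Gamma(1 + 13 a, rate = 1 + sum(theta))  (Gamma-Gamma conjugacy)
    b = rng.gamma(1.0 + a * len(y), 1.0 / (1.0 + theta.sum()))
    samples[it] = theta

post_mean = samples[burn:].mean(axis=0)  # smoothed relative risks per region
```

The hyperparameter update shrinks each region's risk toward the overall level, which is the smoothing effect; a full space-time model replaces the independent Gamma prior with structured (e.g. CAR) priors, sampled the same way.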

  15. Boosting joint models for longitudinal and time-to-event data

    DEFF Research Database (Denmark)

    Waldmann, Elisabeth; Taylor-Robinson, David; Klein, Nadja

    2017-01-01

    Joint models for longitudinal and time-to-event data have gained a lot of attention in the last few years, as they are a helpful technique in clinical studies where longitudinal outcomes are recorded alongside event times. Those two processes are often linked and the two outcomes should thus be model...

  16. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....
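    For orientation, plain (unweighted) balanced truncation — the baseline that time-weighted and stochastic variants refine — can be sketched in a few lines: solve the two Lyapunov equations for the controllability and observability Gramians, then read off the Hankel singular values, whose small entries mark states that can be truncated with little error. The system matrices below are illustrative, not the CD-player or storm-track models from the paper:

```python
import numpy as np

# A stable 4-state LTI system (illustrative values only)
A = np.array([[-1.0, 0.5, 0.0, 0.0],
              [0.0, -2.0, 0.3, 0.0],
              [0.0, 0.0, -3.0, 0.1],
              [0.0, 0.0, 0.0, -4.0]])
B = np.array([[1.0], [0.5], [0.2], [0.1]])
C = np.array([[1.0, 0.2, 0.1, 0.05]])
n = A.shape[0]
I = np.eye(n)

def lyap(M, Q):
    """Solve M X + X M^T + Q = 0 via the Kronecker-product linear system."""
    L = np.kron(M, I) + np.kron(I, M)
    return np.linalg.solve(L, -Q.reshape(-1)).reshape(n, n)

P = lyap(A, B @ B.T)       # controllability Gramian: A P + P A^T + B B^T = 0
Qg = lyap(A.T, C.T @ C)    # observability Gramian:  A^T Q + Q A + C^T C = 0

# Hankel singular values = sqrt(eig(P Q)), sorted largest first;
# truncating states with small values preserves the dominant dynamics.
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Qg).real))[::-1]
```

Time-weighted and stochastic variants change which Gramians are balanced (and hence which error measure is controlled), but the solve-balance-truncate pattern is the same.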

  17. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate-scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. 
With the release of Unity 4.0, new rendering features have been added, including Direct
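    The map-baking step described above can be sketched with numpy: tangent-space normals are recovered from the height gradients of a (here synthetic) displacement map and packed into the usual RGB texture encoding. This is a generic illustration of the technique, not the authors' retopology/baking pipeline:

```python
import numpy as np

# Toy displacement (height) map; real ones are baked from high-poly scans
h = np.fromfunction(lambda y, x: np.sin(x / 8.0) * np.cos(y / 8.0), (64, 64))

# Tangent-space normals from height gradients (assuming unit texel scale)
dhdx = np.gradient(h, axis=1)
dhdy = np.gradient(h, axis=0)
nx, ny, nz = -dhdx, -dhdy, np.ones_like(h)
length = np.sqrt(nx**2 + ny**2 + nz**2)
normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]

# Pack into the 0-255 RGB encoding used by normal-map textures
# (the dominant +Z component is why normal maps look predominantly blue)
normal_map = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
```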

  18. A fast surrogate model tailor-made for real time control

    DEFF Research Database (Denmark)

    Borup, Morten; Thrysøe, Cecilie; Arnbjerg-Nielsen, Karsten

    A surrogate model of a detailed hydraulic urban drainage model is created for supplying inflow forecasts to an MPC model for 31 separate locations. The original model is subdivided into 66 relationships extracted from it. The surrogate model is 9000 times faster than the original...... model, with just a minor deviation from the original model results....
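    The core surrogate idea — replacing an expensive model with cheap precomputed input-output relationships that are then interpolated — can be sketched as follows. The "detailed model" here is a stand-in function, not the hydraulic drainage model from the paper:

```python
import numpy as np

# Stand-in for a detailed hydraulic model: a nonlinear rainfall -> inflow map
# (in practice this would be a slow simulation, evaluated offline)
def detailed_model(rain):
    return 0.8 * rain + 5.0 * np.tanh(rain / 10.0)

# Build the surrogate: tabulate the relationship once, then interpolate.
grid = np.linspace(0.0, 50.0, 25)
table = detailed_model(grid)

def surrogate(rain):
    return np.interp(rain, grid, table)

# The surrogate reproduces the detailed model closely at a fraction of the cost
rain = np.linspace(0.0, 50.0, 1000)
err = np.max(np.abs(surrogate(rain) - detailed_model(rain)))
```

The speed-up comes from never re-running the detailed model online; accuracy depends on how well the extracted relationships cover the operating range.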

  19. A Perspective for Time-Varying Channel Compensation with Model-Based Adaptive Passive Time-Reversal

    Directory of Open Access Journals (Sweden)

    Lussac P. MAIA

    2015-06-01

    Underwater communications mainly rely on acoustic propagation, which is strongly affected by frequency-dependent attenuation, shallow-water multipath propagation, and significant Doppler spread/shift induced by source-receiver-surface motion. Time-reversal-based techniques offer a low-complexity solution to reduce interference caused by multipath, but complete equalization cannot be reached (performance saturates as the signal-to-noise ratio is maximized), and these techniques in their conventional form are quite sensitive to channel variations during transmission. Acoustic propagation modeling in the high-frequency regime can yield physics-based information that is potentially useful to channel compensation methods such as passive time reversal (pTR), which is often employed in Digital Acoustic Underwater Communications (DAUC) systems because of its low computational cost. To overcome the difficulty pTR has with time variations in underwater channels, the aim is to insert physical knowledge from acoustic propagation modeling into the pTR filtering. The authors are investigating the influence of channel physical parameters on the propagation of coherent acoustic signals transmitted through shallow-water waveguides and received on a vertical line array of sensors. A time-variant approach is used, as required to model high-frequency acoustic propagation in realistic scenarios, and applied to a DAUC simulator containing an adaptive passive time-reversal (ApTR) receiver. Understanding the effects of changes in the physical features of the channel on propagation can lead to ApTR filter designs that improve communication system performance. This work presents a short extension and review of paper 12, which tested Doppler distortion induced by source-surface motion and ApTR compensation for a DAUC system on a simulated time-variant channel, in the scope of model-based equalization.
Environmental focusing approach
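    The passive time-reversal principle behind the receiver discussed above can be sketched with numpy: each sensor's received signal is convolved with its own time-reversed channel estimate and the results are summed, so cross terms average out and the combined q-function (the sum of channel autocorrelations) focuses at zero lag. The multipath responses below are random illustrative taps, not a modeled ocean waveguide:

```python
import numpy as np

rng = np.random.default_rng(2)

# A short windowed data pulse and hypothetical multipath responses
# at a 4-element vertical array
s = np.sin(2 * np.pi * 0.1 * np.arange(40)) * np.hanning(40)
channels = []
for _ in range(4):
    h = np.zeros(60)
    taps = rng.choice(60, size=5, replace=False)
    h[taps] = rng.normal(0.0, 1.0, 5)
    channels.append(h)

# Received signals r_i = s * h_i; pTR convolves each with h_i reversed
# in time and sums across the array.
out = sum(np.convolve(np.convolve(s, h), h[::-1]) for h in channels)

# Combined q-function: sum of channel autocorrelations, peaking at zero lag,
# which is the multipath-focusing effect pTR exploits.
q = sum(np.convolve(h, h[::-1]) for h in channels)
```

When the channel drifts during the transmission, the stored estimates h_i no longer match and the focus degrades — this is the sensitivity that model-based adaptive (ApTR) filtering aims to correct.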

  20. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    In this paper, an NDVI time series forecasting model is developed based on a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; it is therefore an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space that undergoes transitions from one state to another with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques involved in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
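    As a simplified illustration of the forecasting idea (using a discretised state space rather than the paper's continuous-state formulation, and synthetic data rather than real NDVI), a first-order Markov chain forecast can be built by counting one-step state transitions and taking the expected next state:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic NDVI-like series in [0, 1] with a seasonal cycle plus noise
t = np.arange(300)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 23) + rng.normal(0, 0.02, t.size)
ndvi = np.clip(ndvi, 0.0, 1.0)

# Discretise into K states and count one-step transitions
K = 10
states = np.minimum((ndvi * K).astype(int), K - 1)
counts = np.zeros((K, K))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1

# Row-normalise to transition probabilities; unvisited rows fall back to uniform
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums,
              out=np.full_like(counts, 1.0 / K), where=row_sums > 0)

# One-step forecast: expected state centre given the current state
centres = (np.arange(K) + 0.5) / K
forecast = centres @ P[states[-1]]
```

A continuous-state chain replaces the transition matrix with a transition kernel (e.g. a conditional density estimated from the data), but the forecast is still the conditional expectation of the next value given the current one.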