WorldWideScience

Sample records for models produce time

  1. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

Many human-related activities show power-law decaying interevent time distributions, with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and show that the interevent time distribution decays as a power law for any input distribution that remains normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of the interevent time distribution (β) and the autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distributions. We conclude that a slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
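The task-queuing mechanism can be illustrated with a minimal simulation (a sketch in the spirit of the model described above, not the authors' exact implementation): a short list of tasks with random priorities, where the highest-priority task is executed and replaced each step, already yields heavy-tailed waiting times.

```python
import random

random.seed(0)

def simulate_queue(steps=100_000, list_len=2):
    # Each task is (priority, time it entered the list); at every step the
    # highest-priority task is executed and replaced by a fresh task with a
    # uniform random priority.  We record how long each executed task waited.
    tasks = [(random.random(), 0) for _ in range(list_len)]
    waits = []
    for t in range(1, steps + 1):
        i = max(range(list_len), key=lambda k: tasks[k][0])
        waits.append(t - tasks[i][1])
        tasks[i] = (random.random(), t)
    return waits

waits = simulate_queue()
# Most waits last a single step, but tasks that happen to get a low priority
# can wait very long -- the bursty, heavy-tailed statistics discussed above.
```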

  2. Prime Time Power: Women Producers, Writers and Directors on TV.

    Science.gov (United States)

    Steenland, Sally

    This report analyzes the number of women working in the following six decision making jobs in prime time television: (1) executive producer; (2) supervising producer; (3) producer; (4) co-producer; (5) writer; and (6) director. The women who hold these positions are able to influence the portrayal of women on television as well as to improve the…

  3. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

In this paper we analyze the complexity of time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. We then define an example scenario based on current Czech legislation and capture it in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially, which causes trouble for analysts and adds unnecessary complexity to the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.

  4. Producing complex spoken numerals for time and space

    NARCIS (Netherlands)

    Meeuwissen, M.H.W.

    2004-01-01

    This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult

  5. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

… This phenomenon is known as attentional dwell time (e.g. Duncan, Ward, Shapiro, 1994). All previous studies of attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from … of attentional dwell time extends these mechanisms by proposing that the processing resources (cells) already engaged in a feedback loop (i.e. allocated to an object) are locked in VSTM and therefore cannot be allocated to other objects in the visual field before the encoded object has been released. … This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell-time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects …

  6. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  7. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2015-01-01

This technical paper documents Kennedy Space Center's Independent Assessment team work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer (CSO) and GSDO management during key programmatic reviews. The assessments provided the GSDO Program with an analysis of how egress time affects the likelihood of astronaut and worker survival during an emergency. For each assessment, the team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building (VAB). Based on the composite survivability-versus-time graphs from the first two assessments, there was a soft knee in the Figure of Merit graphs at eight minutes (ten minutes after egress was ordered). Thus, the graphs illustrated to the decision makers that the final emergency egress design selected should be capable of transporting the flight crew from the top of LC-39B to a safe location in eight minutes or less. Results for the third assessment were dominated by hazards that were classified as instantaneous in nature (e.g. stacking mishaps) and therefore had no effect on survivability versus time to egress the VAB. VAB emergency scenarios that degraded over time (e.g. fire) produced survivability-versus-time graphs that were in line with aerospace industry norms.
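The survivability-versus-time idea can be sketched numerically (a hypothetical illustration, not KSC's actual model): sample hazard onset times from an assumed distribution and, for each candidate egress duration, estimate the probability that egress completes before the hazard becomes lethal.

```python
import math
import random

random.seed(1)

def survivability_curve(egress_times, hazard_sampler, n=20_000):
    # Monte Carlo estimate: for each candidate egress time t, the fraction
    # of sampled hazard onset times occurring after t approximates the
    # probability of reaching a safe location in time.
    samples = [hazard_sampler() for _ in range(n)]
    return [sum(h > t for h in samples) / n for t in egress_times]

# Hypothetical hazard: lethal conditions arrive lognormally, median 10 min.
curve = survivability_curve(range(0, 21, 2),
                            lambda: random.lognormvariate(math.log(10), 0.5))
# curve[i] is the survival probability for an egress taking 2*i minutes; it
# decreases monotonically, and a "knee" in such curves is the kind of feature
# the assessments looked for when setting an egress-time requirement.
```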

  8. Effect produced by the charge collection time upon the time and energy resolution of semiconductor detectors

    International Nuclear Information System (INIS)

    Morozov, V.A.; Stegailov, V.I.; Zinov, V.G.; Yashin, S.N.

    1986-01-01

The effect produced by the charge collection time upon the time and energy resolution of semiconductor detectors has been studied. It is shown that sampling of pulse rise times permits one to identify in coaxial detectors a group of pulses corresponding to the detection of radiation in surface layers of the detector. Choosing pulses with the maximum rise-time rate allows one to improve the time resolution of the coincidence spectrometer and achieve 2τ = 1.65 ns, instead of 2τ = 5.50 ns, for coincidences of the 1332 keV gamma line and β⁻ particles from ⁶⁰Co. (orig.)

  9. Modelling urban travel times

    NARCIS (Netherlands)

    Zheng, F.

    2011-01-01

Urban travel times are intrinsically uncertain due to the many stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  10. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f
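As a minimal concrete instance of expressing a phenomenon through its own past values, one can simulate a first-order autoregressive (AR(1)) series and recover its coefficient by least squares (an illustrative sketch, not an example taken from the book):

```python
import random

random.seed(2)

# Simulate x_t = phi * x_{t-1} + e_t with Gaussian noise, then estimate phi
# by regressing each value on its predecessor (least squares, no intercept).
phi_true = 0.7
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

phi_hat = (sum(a * b for a, b in zip(x[1:], x[:-1]))
           / sum(a * a for a in x[:-1]))
# phi_hat recovers a value close to 0.7
```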

  11. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  12. Time-Correlated Particles Produced by Cosmic Rays

    Energy Technology Data Exchange (ETDEWEB)

    Chapline, George F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glenn, Andrew M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nakae, Les F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pawelczak, Iwona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Snyderman, Neal J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sheets, Steven A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wurtz, Ron E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    This report describes the NA-22 supported cosmic ray experimental and analysis activities carried out at LLNL since the last report, dated October 1, 2013. In particular we report on an analysis of the origin of the plastic scintillator signals resembling the signals produced by minimum ionizing particles (MIPs). Our most notable result is that when measured in coincidence with a liquid scintillator neutron signal the MIP-like signals in the plastic scintillators are mainly due to high energy tertiary neutrons.

  13. Distinct timing mechanisms produce discrete and continuous movements.

    Directory of Open Access Journals (Sweden)

    Raoul Huys

    2008-04-01

The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate over whether this classification implies different control processes. Up until the present, this debate has been empirically based. Here, we present an unambiguous non-empirical classification, based on theorems in dynamical systems theory, that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.
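The topological distinction can be made concrete with a toy simulation (a sketch using standard textbook oscillators, not the authors' models): a damped linear oscillator relaxes to a fixed point, so repeating a discrete movement needs an external time keeper, whereas a van der Pol oscillator settles onto a self-sustained limit cycle and needs none.

```python
def steady_amplitude(damping, steps=60_000, dt=0.005):
    # Euler integration of x'' = damping(x, v) - x; returns the largest |x|
    # seen in the second half of the run (the long-term amplitude).
    x, v = 2.0, 0.0
    peak = 0.0
    for i in range(steps):
        a = damping(x, v) - x
        x, v = x + v * dt, v + a * dt
        if i > steps // 2:
            peak = max(peak, abs(x))
    return peak

# Linearly damped oscillator: trajectories collapse onto a fixed point.
fixed_point = steady_amplitude(lambda x, v: -0.5 * v)
# Van der Pol oscillator: nonlinear damping sustains a limit cycle.
limit_cycle = steady_amplitude(lambda x, v: (1 - x * x) * v)
```

The first system's long-term amplitude is essentially zero (a fixed point), the second's stays near 2 (a limit cycle), mirroring the discrete/rhythmic dichotomy above.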

  14. New model of Raman spectra in laser produced plasma

    International Nuclear Information System (INIS)

Some experimental observations of Raman scattering in laser-produced plasma have previously been attributed to the onset of the convective stimulated Raman instability (SRS-C). This interpretation has had a number of difficulties, associated with the calculated threshold for onset of the SRS-C, the existence of gaps in the frequency spectrum near the incident frequency ω₀ and near ω₀/2, and with the angular distribution. We now propose a new explanation based on ordinary incoherent Thomson scattering, with a greatly enhanced plasma line. Transient local reversed-slope velocity distributions in the underdense region can be produced by pulses of hot electrons arising from the two-plasmon (2ωₚ) or absolute stimulated Raman instabilities (SRS-A) occurring near the quarter-critical surface. A simple model yields the observed spectral gaps near ω₀ and near ω₀/2. It also explains the correlation of the onset of this scattering with the onset of the SRS-A, its transient localization in frequency and time, and the weak azimuthal angular variation. The existence of upscattered light is also predicted

  15. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  16. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  17. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program, carried out to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews. The assessments provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  18. Mentoring, Modeling, and Money: The 3 Ms of Producing Academics

    Science.gov (United States)

    Shapiro, Edward S.; Blom-Hoffman, Jessica

    2004-01-01

    The program at Lehigh University has been very successful in producing a high percentage of students (42% of all graduates) who have entered academic careers as trainers of school psychologists. This article presents a conceptual model for the three variables that are considered as critical components of why students select an academic…

  19. Time-resolved spectroscopy of nonequilibrium ionization in laser-produced plasmas

    International Nuclear Information System (INIS)

    Marjoribanks, R.S.

    1988-01-01

The highly transient ionization characteristic of laser-produced plasmas at high energy densities has been investigated experimentally, using x-ray spectroscopy with time resolution of less than 20 ps. Spectroscopic diagnostics of plasma density and temperature were used, including line ratios, line profile broadening and continuum emission, to characterize the plasma conditions without relying immediately on ionization modeling. The experimentally measured plasma parameters were used as independent variables driving an ionization code, as a test of ionization modeling divorced from hydrodynamic calculations. Several state-of-the-art streak spectrographs, each recording a fiducial of the laser peak along with the time-resolved spectrum, characterized the laser heating of thin signature layers of different atomic numbers embedded in plastic targets. A novel design of crystal spectrograph, with a conically curved crystal, was developed. Coupled with a streak camera, it provided high resolution (λ/Δλ > 1000) and a collection efficiency roughly 20-50 times that of planar crystal spectrographs, affording improved spectra for quantitative reduction and greater sensitivity for the diagnosis of weak emitters. Experimental results were compared to hydrocode and ionization code simulations, with poor agreement. The conclusions question the appropriateness of describing electron velocity distributions by a temperature parameter during the time of laser illumination and emphasize the importance of characterizing the distribution more generally

  20. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

Studies of the time course of visual attention have identified a temporary functional blindness to the second of two spatially separated targets: attending to one visual stimulus may lead to impairments in identifying a second stimulus presented between 200 and 500 ms after the first. This phenomenon is known as attentional dwell time (e.g. Duncan, Ward, Shapiro, 1994). All previous studies of attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from individual subjects. Our new model is based on the Theory of Visual Attention (TVA; Bundesen, 1990). TVA has previously been successful in explaining results from experiments where stimuli are presented simultaneously in the spatial domain (e.g. whole report and partial report) but has not yet been extended …

  1. Formation time of hadrons and density of matter produced in relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Pisut, J.; Zavada, P.

    1994-06-01

Densities of interacting hadronic matter produced in Oxygen-Lead and Sulphur-Lead collisions at 200 GeV/nucleon are estimated as a function of the formation time of hadrons. Uncertainties in our knowledge of the critical temperature T_c and of the formation time of hadrons τ₀ permit at present three scenarios: an optimistic one (QGP has already been produced in collisions of Oxygen and Sulphur with heavy ions and will be copiously produced in Lead collisions), a pessimistic one (QGP cannot be produced at 200 GeV/nucleon) and an intermediate one (QGP has not been produced in Oxygen and Sulphur interactions with heavy ions and will at best be produced only marginally in Pb collisions). The last option is found to be the most probable. (author)

  2. Modeling terrestrial gamma ray flashes produced by relativistic feedback discharges

    Science.gov (United States)

    Liu, Ningyu; Dwyer, Joseph R.

    2013-05-01

This paper reports a modeling study of terrestrial gamma ray flashes (TGFs) produced by relativistic feedback discharges. Terrestrial gamma ray flashes are intense bursts of energetic radiation originating from the Earth's atmosphere that have been observed by spacecraft. They are produced by bremsstrahlung interactions of energetic electrons, known as runaway electrons, with air atoms. An efficient physical mechanism for producing the large fluxes of runaway electrons needed to make the TGFs is the relativistic feedback discharge, where seed runaway electrons are generated by positrons and X-rays, products of the discharge itself. Once the relativistic feedback discharge becomes self-sustaining, an exponentially increasing number of relativistic electron avalanches propagate through the same high-field region inside the thundercloud until the electric field is partially discharged by the ionization created by the discharge. The modeling results indicate that the durations of the TGF pulses produced by the relativistic feedback discharge vary from tens of microseconds to several milliseconds, encompassing all durations of the TGFs observed so far. In addition, when a sufficiently large potential difference is available in thunderclouds, a self-propagating discharge known as the relativistic feedback streamer can be formed, which propagates like a conventional positive streamer. For the relativistic feedback streamer, the positive feedback mechanism of runaway electron production by the positrons and X-rays plays a similar role to the photoionization for the conventional positive streamer. The simulation results of the relativistic feedback streamer show that a sequence of TGF pulses with varying durations can be produced by the streamer. The relativistic streamer may initially propagate in a pulsed manner and turn into a continuous propagation mode at a later stage. Milliseconds-long TGF pulses can be produced by the feedback streamer during its continuous propagation. However
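The self-sustaining condition can be sketched as a simple branching count (a hypothetical illustration; `gamma` stands for the expected number of secondary avalanches each avalanche seeds via positron and X-ray feedback):

```python
def avalanche_counts(gamma, generations=10, n0=1.0):
    # Each feedback cycle multiplies the number of relativistic electron
    # avalanches by gamma; gamma >= 1 makes the discharge self-sustaining.
    counts = [n0]
    for _ in range(generations):
        counts.append(counts[-1] * gamma)
    return counts

subcritical = avalanche_counts(0.8)    # feedback too weak: dies away
supercritical = avalanche_counts(1.5)  # exponential growth, continuing until
                                       # ionization partially discharges the field
```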

  3. Analysis of sludge aggregates produced during electrocoagulation of model wastewater.

    Science.gov (United States)

    Załęska-Chróst, B; Wardzyńska, R

    2016-01-01

This paper presents the results of a study of sludge aggregates produced during electrocoagulation of model wastewater with a composition corresponding to effluents from the cellulose and paper industry. Wastewater was electrocoagulated statically using aluminium electrodes with current densities of 31.25 A m⁻² and 62.50 A m⁻². In subsequent stages of the treatment, sludge flocs were collected, their size was measured, and their settling velocity (30-520 μm s⁻¹) and fractal dimension (D) were determined. The values of D ranged from 1.53 to 1.95 and were directly proportional to the degree of wastewater treatment. Higher values of D were determined for sludge with lower water content (after 24 hours' settling). Fractal dimension can therefore be used as an additional control parameter in wastewater treatment.
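Fractal dimension is commonly estimated from a box-counting scaling N(s) ∝ s^(−D); a minimal sketch (not the authors' method, which worked from floc measurements) counts occupied boxes at several scales and fits the log-log slope:

```python
import math
import random

random.seed(3)

def box_counting_dimension(points, sizes):
    # Count occupied boxes at each box size s and fit the slope of
    # log N(s) versus log(1/s), which estimates the fractal dimension D.
    logs = []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1 / s), math.log(len(boxes))))
    mx = sum(a for a, _ in logs) / len(logs)
    my = sum(b for _, b in logs) / len(logs)
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))

# Sanity check on a space-filling set of points: D should be close to 2.
pts = [(random.random(), random.random()) for _ in range(200_000)]
D = box_counting_dimension(pts, [0.2, 0.1, 0.05, 0.025])
```

A compact aggregate would score near the embedding dimension, a tenuous one well below it, which is why D tracks floc compactness and water content in the study above.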

  4. Modelling the oil producers: Capturing oil industry knowledge in a behavioural simulation model

    International Nuclear Information System (INIS)

    Morecroft, J.D.W.; Van der Heijden, K.A.J.M.

    1992-01-01

A group of senior managers and planners from a major oil company met to discuss the changing structure of the oil industry, with the purpose of improving the group's understanding of oil market behaviour for use in global scenarios. This broad-ranging discussion led to a system dynamics simulation model of the oil producers. The model produced new insights into the power and stability of OPEC (the major oil producers' organization), the dynamics of oil prices, and the investment opportunities of non-OPEC producers. The paper traces the model development process, starting from group discussions and leading to working simulation models. Particular attention is paid to the methods used to capture team knowledge and to ensure that the computer models reflected opinions and ideas from the meetings. The paper describes how flip-chart diagrams were used to collect ideas about the logic of the principal producers' production decisions. A sub-group of the project team developed and tested an algebraic model. The paper shows partial model simulations used to build confidence and a sense of ownership in the algebraic formulations. Further simulations show how the full model can stimulate thinking about producers' behaviour and oil prices. The paper concludes with comments on the model-building process. 11 figs., 37 refs

  5. Is the Merchant Power Producer a broken model?

    International Nuclear Information System (INIS)

    Nelson, James; Simshauser, Paul

    2013-01-01

    Deregulated energy markets were founded on the Merchant Power Producer, a stand-alone generator that sold its production to the spot and short-term forward markets, underpinned by long-dated project finance. The initial enthusiasm that existed for investment in existing and new merchant power plant capacity shortly after power system deregulation has progressively dissipated, following an excess entry result. In this article, we demonstrate why this has become a global trend. Using debt-sizing parameters typically used by project banks, we model a benchmark plant, then re-simulate its performance using live energy market price data and find that such financings are no longer feasible in the absence of long-term Power Purchase Agreements. - Highlights: ► We model a hypothetical CCGT plant in QLD under project financing constraints typical of the industry. ► We simulate plant operations with live market data to analyse the results. ► We find that a plant which should represent the industry's long-run marginal cost is not a feasible investment.
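The debt-sizing logic the authors refer to can be sketched with a stylized DSCR (debt service coverage ratio) constraint; the figures below are hypothetical, not taken from the paper:

```python
def max_supportable_debt(cash_flows, rate, dscr_min):
    # Size the loan so that the worst year's cash flow still covers the
    # annual annuity payment at least dscr_min times.
    payment = min(cash_flows) / dscr_min
    n = len(cash_flows)
    annuity_factor = (1 - (1 + rate) ** -n) / rate
    return payment * annuity_factor

# A plant backed by a long-term PPA (stable revenue) versus a merchant plant
# with volatile spot revenue: same average cash flow, very different capacity.
ppa = max_supportable_debt([100] * 10, rate=0.07, dscr_min=1.5)
merchant = max_supportable_debt([160, 40, 170, 30, 150, 45, 180, 35, 140, 50],
                                rate=0.07, dscr_min=1.5)
# Debt capacity is driven by min(cash_flows), illustrating why volatile spot
# revenues without a long-term PPA can make the financing infeasible.
```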

  6. Time-Resolved Analysis of High-Power-Laser Produced Plasma Expansion in Vacuum

    International Nuclear Information System (INIS)

    Aliverdiev, A.; Batani, D.; Dezulian, R.; Malka, V.; Vinci, T.; Koenig, M.; Benuzzi-Mounaix, A.

    2005-01-01

    Here we consider the results of an experimental investigation of the temporal evolution of plasmas produced by high power laser irradiation of various types of target materials. The experiment was performed at the LULI Laboratory (Ecole Polytechnique, Paris). We have developed a method to analyze time-resolved streak-camera images and analyzed a number of results obtained with various materials

  7. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
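A quadratic discrete-time model can be illustrated with Monte Carlo pricing (a sketch, not the paper's closed-form recursion): the short rate is the square of a Gaussian AR(1) factor, which keeps rates nonnegative.

```python
import math
import random

random.seed(4)

def mc_zero_coupon_price(phi, sigma, x0, n, paths=20_000):
    # Short rate r_t = x_t^2 with factor dynamics x_{t+1} = phi*x_t + sigma*eps;
    # the bond price is E[exp(-sum of r_t over the n periods)].
    total = 0.0
    for _ in range(paths):
        x, accrued = x0, 0.0
        for _ in range(n):
            accrued += x * x
            x = phi * x + sigma * random.gauss(0, 1)
        total += math.exp(-accrued)
    return total / paths

price = mc_zero_coupon_price(phi=0.9, sigma=0.1, x0=0.2, n=5)
# Because r_t = x_t^2 >= 0, the price always lies in (0, 1].
```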

  8. Late-time particle emission from laser-produced graphite plasma

    International Nuclear Information System (INIS)

    Harilal, S. S.; Hassanein, A.; Polek, M.

    2011-01-01

We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicate that such firework-like emission is strongly dependent on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  9. Late-time particle emission from laser-produced graphite plasma

    Energy Technology Data Exchange (ETDEWEB)

    Harilal, S. S.; Hassanein, A.; Polek, M. [School of Nuclear Engineering, Center for Materials Under Extreme Environment, Purdue University, West Lafayette, Indiana 47907 (United States)

    2011-09-01

We report a late-time "fireworks-like" particle emission from laser-produced graphite plasma during its evolution. Plasmas were produced using graphite targets excited with a 1064 nm Nd:yttrium aluminum garnet (YAG) laser in vacuum. The time evolution of the graphite plasma was investigated using fast gated imaging and visible emission spectroscopy. The emission dynamics of the plasma change rapidly with time, and the delayed firework-like emission from the graphite target followed a black-body curve. Our studies indicate that such firework-like emission is strongly dependent on target material properties and is explained by material spallation caused by overheating of the trapped gases through thermal diffusion along the layer structures of graphite.

  10. Energy spectra and fluence of the neutrons produced in deformed space-time conditions

    Science.gov (United States)

    Cardone, F.; Rosada, A.

    2016-10-01

In this work, the energy spectra and fluence of neutrons produced under deformed space-time (DST) conditions, arising from violation of local Lorentz invariance (LLI) in nuclear interactions, are shown for the first time. DST-neutrons are produced by a mechanical process in which AISI 304 steel bars undergo sonication with 20 kHz, 330 W ultrasound. The energy spectrum of the DST-neutrons has been investigated both at low (less than 0.4 MeV) and at high (up to 4 MeV) energies. We conclude that the DST-neutrons have different spectra in different energy intervals. It is therefore possible to hypothesize that DST-neutron production presents peculiar features not only with respect to time (asynchrony) and space (asymmetry) but also in the neutron energy spectra.

  11. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
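A standard discrete-lag example of the kind discussed here is Hutchinson's delayed logistic equation x'(t) = r x(t)(1 − x(t − τ)); a small Euler sketch shows the qualitative effect of the lag (oscillations appear once rτ exceeds π/2):

```python
def delayed_logistic(r, tau, t_end=50.0, dt=0.01):
    # Euler integration of x'(t) = r * x(t) * (1 - x(t - tau)), storing the
    # full trajectory so the lagged value can be looked up in the history.
    lag = int(tau / dt)
    xs = [0.5] * (lag + 1)          # constant history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        x = xs[-1]
        xs.append(x + dt * r * x * (1 - xs[-1 - lag]))
    return xs

# Short lag (r*tau < pi/2): the solution settles to the equilibrium x = 1.
settled = delayed_logistic(r=1.0, tau=0.5)
# Long lag (r*tau > pi/2): the equilibrium loses stability and the lag
# itself produces sustained oscillations.
oscillating = delayed_logistic(r=1.0, tau=2.0)
```

The same equation without the lag never oscillates, which is exactly the kind of qualitative change the lecture notes attribute to introducing lags.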

  12. RTMOD: Real-Time MODel evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)

    2000-01-01

The 1998-1999 RTMOD project is a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for display, intercomparison, and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution in both latitude and longitude, the domain extending from 5W to 40E and from 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results were recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time

  13. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France, in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  14. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  15. Time-resolved energy spectrum of a pseudospark-produced high-brightness electron beam

    International Nuclear Information System (INIS)

    Myers, T.J.; Ding, B.N.; Rhee, M.J.

    1992-01-01

The pseudospark, a fast low-pressure gas discharge between a hollow cathode and a planar anode, is found to be an interesting high-brightness electron beam source. Typically, an electron beam produced in the pseudospark has a peak current of ∼1 kA, a pulse duration of ∼50 ns, and an effective emittance of ∼100 mm-mrad. The energy distribution of this electron beam, however, is its least understood property, due to the difficulty of measuring a high-current-density beam that is partially space-charge neutralized by the background ions produced in the gas. In this paper, an experimental study of the time-resolved energy spectrum is presented. The pseudospark-produced electron beam is injected into a vacuum through a small pinhole so that the electrons, free of background ions, follow single-particle motion; the beam is sent through a negatively biased electrode, and only the portion of the beam whose energy exceeds the bias voltage can pass through the electrode, its current measured by a Faraday cup. The Faraday cup signals at various bias voltages are recorded on a digital oscilloscope. The recorded waveforms are then numerically analyzed to construct a time-resolved energy spectrum. Preliminary results are presented

  16. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  17. Sub-nanosecond plastic scintillator time response studies using laser produced x-ray pulsed excitation

    International Nuclear Information System (INIS)

    Tirsell, K.G.; Tripp, G.R.; Lent, E.M.; Lerche, R.A.; Cheng, J.C.; Hocker, L.; Lyons, P.B.

    1976-01-01

The light emission time response of quenched NE111 plastic scintillators has been measured using a streak camera (20 ps resolution) and 100 to 180 ps, 1.06 μm, laser-produced, pulsed, low energy x-ray excitation. Each light output pulse was obtained by deconvolution from the film data using the x-ray temporal response measured with an x-ray sensitive streak camera (10 ps resolution). Time response parameters are presented for benzophenone and acetophenone, quenching agents which most effectively reduce the decay time of the singlet component. Full widths at half maximum less than or equal to 260 ps were observed for NE111 samples quenched with greater than or equal to 2 percent benzophenone. Results are given for unquenched samples consisting of different concentrations of butyl-PBD in PVT and for the phosphor ZnO doped with Ga

  18. Sub-nanosecond plastic scintillator time response studies using laser produced x-ray pulsed excitation

    International Nuclear Information System (INIS)

    Tirsell, K.G.; Tripp, G.R.; Lent, E.M.; Lerche, R.A.; Cheng, J.C.; Hocker, L.; Lyons, P.B.

    1977-01-01

The light emission time response of quenched NE111 plastic scintillators has been measured using a streak camera (20 ps resolution) and 100 to 180 ps, 1.06 μm, laser-produced, pulsed, low energy x-ray excitation. Each light output pulse was obtained by deconvolution from the film data using the x-ray temporal response measured with an x-ray sensitive streak camera (10 ps resolution). Time response parameters are presented for benzophenone and acetophenone, quenching agents which most effectively reduce the decay time of the singlet component. Full widths at half maximum less than or equal to 260 ps were observed for NE111 samples quenched with greater than or equal to 2 percent benzophenone. Results are given for unquenched samples consisting of different concentrations of butyl-PBD in PVT and for the phosphor ZnO doped with Ga

  19. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  20. MODELLING OF ORDINAL TIME SERIES BY PROPORTIONAL ODDS MODEL

    Directory of Open Access Journals (Sweden)

    Serpil AKTAŞ ALTUNAY

    2013-06-01

Full Text Available Categorical time series with random time-dependent covariates often arise when the study variables are measured on a categorical scale. Several models have been proposed in the literature for the analysis of categorical time series; for example, Markov chain models, integer autoregressive processes, and discrete ARMA models can be utilized. In general, the choice of model depends on the measurement scale of the study variables: nominal, ordinal, or interval. Regression theory, based on generalized linear models and partial likelihood inference, is a successful approach for categorical time series. One regression model for ordinal time series is the proportional odds model. In this study, the proportional odds model approach to ordinal categorical time series is investigated on a real air pollution data set and the results are discussed.
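A proportional odds model relates an ordinal response to covariates through cumulative logits, P(Y ≤ j | x) = logistic(θ_j − βx), with a single slope β shared across all cut points. A minimal numpy sketch with hypothetical coefficients (the study's air pollution data and fitted values are not reproduced here):

```python
import numpy as np

def cumulative_logit_probs(x, thresholds, beta):
    """Category probabilities under a proportional odds model:
    P(Y <= j | x) = logistic(theta_j - beta * x), with one common
    slope beta shared across all cumulative splits."""
    z = np.asarray(thresholds, dtype=float) - beta * x
    cum = 1.0 / (1.0 + np.exp(-z))          # P(Y <= j) at each cut point
    cum = np.concatenate((cum, [1.0]))      # the last category closes at 1
    probs = np.diff(np.concatenate(([0.0], cum)))
    return probs

# Hypothetical 3-category pollution index (low/medium/high) vs. one covariate
p = cumulative_logit_probs(x=1.5, thresholds=[-0.5, 1.0], beta=0.8)
```

Because the thresholds are increasing, the cumulative probabilities are increasing as well, so the per-category probabilities are positive and sum to one.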

  1. Larger Neural Responses Produce BOLD Signals That Begin Earlier in Time

    Directory of Open Access Journals (Sweden)

    Serena eThompson

    2014-06-01

Full Text Available Functional MRI analyses commonly rely on the assumption that the temporal dynamics of hemodynamic response functions (HRFs) are independent of the amplitude of the neural signals that give rise to them. The validity of this assumption is particularly important for techniques that use fMRI to resolve sub-second timing distinctions between responses, in order to make inferences about the ordering of neural processes. Whether or not the detailed shape of the HRF is independent of neural response amplitude remains an open question, however. We performed experiments in which we measured responses in primary visual cortex (V1) to large, contrast-reversing checkerboards at a range of contrast levels, which should produce varying amounts of neural activity. Ten subjects (ages 22-52) were studied in each of two experiments using 3 Tesla scanners. We used rapid (250 msec) temporal sampling (repetition time, or TR) and both short and long inter-stimulus interval (ISI) stimulus presentations. We tested for a systematic relationship between the onset of the HRF and its amplitude across conditions, and found a strong negative correlation between the two measures when stimuli were separated in time (long- and medium-ISI experiments), but not in the short-ISI experiment. Thus, stimuli that produce larger neural responses, as indexed by HRF amplitude, also produced HRFs with shorter onsets. The relationship between amplitude and latency was strongest in voxels with the lowest mean-normalized variance (i.e., parenchymal voxels). The onset differences observed in the longer-ISI experiments are likely attributable to mechanisms of neurovascular coupling, since they are substantially larger than reported differences in the onset of action potentials in V1 as a function of response amplitude.

  2. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model...... applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic......

  3. CANFIS: A non-linear regression procedure to produce statistical air-quality forecast models

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, W.R.; Montpetit, J. [Environment Canada, Downsview, Ontario (Canada). Meteorological Research Branch; Pudykiewicz, J. [Environment Canada, Dorval, Quebec (Canada)

    1997-12-31

Statistical models for forecasts of environmental variables can provide a good trade-off between significance and precision in return for substantial savings in computer execution time. Recent non-linear regression techniques give significantly increased accuracy compared to traditional linear regression methods. Two are Classification and Regression Trees (CART) and the Neuro-Fuzzy Inference System (NFIS). Both can model predictand distributions, including the tails, with much better accuracy than linear regression. Given a learning data set of matched predictands and predictors, CART regression produces a non-linear, tree-based, piecewise-continuous model of the predictand data. Its variance-minimizing procedure optimizes the task of predictor selection, often greatly reducing initial data dimensionality. NFIS reduces dimensionality by a procedure known as subtractive clustering, but it does not of itself eliminate predictors. Overlapping coverage in predictor space is enhanced by NFIS with a Gaussian membership function for each cluster component. Coefficients for a continuous response model based on the fuzzified cluster centers are obtained by a least-squares estimation procedure. CANFIS is a two-stage data-modeling technique that combines the strength of CART, to optimize the process of selecting predictors from a large pool of potential predictors, with the modeling strength of NFIS. A CANFIS model requires negligible computer time to run. CANFIS models for ground-level O{sub 3}, particulates, and other pollutants will be produced for each of about 100 Canadian sites. The air-quality models will run twice daily using a small number of predictors isolated from a large pool of upstream and local Lagrangian potential predictors.
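The variance-minimizing procedure that CART uses for predictor selection can be sketched as a single regression-tree step: choose the split threshold that minimizes the summed within-leaf variance of the response. This is an illustration of the general technique, not the CANFIS implementation:

```python
import numpy as np

def best_split(x, y):
    """Find the split threshold on x that minimises the summed variance
    of y in the two resulting leaves (one step of CART regression)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_cost, best_threshold = np.inf, None
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        cost = left.var() * len(left) + right.var() * len(right)
        if cost < best_cost:
            best_cost = cost
            best_threshold = 0.5 * (xs[i - 1] + xs[i])
    return best_threshold

# Step-function data: y jumps at x = 0.5, so the split should land near 0.5
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.5, 1.0, 3.0) + rng.normal(0, 0.1, 200)
threshold = best_split(x, y)
```

Repeating this split recursively on each leaf yields the tree-based, piecewise-continuous model the abstract describes.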

  4. Producing Coordinate Time Series for Iraq's CORS Site for Detection Geophysical Phenomena

    Directory of Open Access Journals (Sweden)

    Oday Yaseen Mohamed Zeki Alhamadani

    2018-01-01

Full Text Available Global Navigation Satellite Systems (GNSS) have become an integral part of a wide range of applications. One of these applications is in cellular phones, to locate the position of users, and this technology has been employed in social media applications. GNSS have also been employed effectively in transportation, GIS, and mobile satellite communications. The geomatics sciences, in turn, use GNSS for many practical and scientific applications such as surveying, mapping, and monitoring. In this study, the GNSS raw data of the ISER CORS, located in the north of Iraq, are processed and analyzed to build up coordinate time series for the purpose of detecting the Arabian tectonic plate motion over seven and a half years. Such coordinate time series have been produced very efficiently using GNSS Precise Point Positioning (PPP). The daily PPP results were processed, analyzed, and presented as coordinate time series using GPS Interactive Time Series Analysis (GITSA). Furthermore, MATLAB (V.2013a) was used in this study to computerize GITSA with a Graphic User Interface (GUI). The objective of this study was to investigate both the homogeneity and the consistency of Iraq's CORS GNSS raw data for detecting any geophysical changes over a long period of time. Additionally, this study aims to employ free online PPP services, such as the CSRS_PPP software, for processing GNSS raw data to generate GNSS coordinate time series. The coordinate time series of the ISER station showed +20.9 mm per year, +27.2 mm per year, and -11.3 mm per year in the East, North, and up-down components, respectively.
These findings show a remarkable similarity with those obtained by long-term monitoring of the Earth's crust deformation and movement based on global studies, and this highlights the importance of using GNSS for monitoring tectonic plate motion based on CORS and online GNSS data processing services over long periods of
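A site velocity like those reported above is typically estimated by least-squares fitting of a linear trend (plus seasonal terms) to the daily coordinate time series. A sketch on synthetic data, with the true east rate set to the reported +20.9 mm/yr purely for illustration:

```python
import numpy as np

# Synthetic daily east-component series over 7.5 years: linear plate motion
# plus an annual signal and noise (all values chosen for illustration only).
rng = np.random.default_rng(1)
t = np.arange(0, 7.5, 1 / 365.25)                  # time in years
east_mm = 20.9 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 3.0, t.size)

# Design matrix: intercept, linear trend, annual sine/cosine terms
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, east_mm, rcond=None)
velocity_mm_per_yr = coef[1]
```

Modeling the annual term explicitly keeps the seasonal signal from biasing the estimated velocity, which is why trend-plus-harmonics designs are standard for CORS series.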

  5. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
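The representation described above is an integral of a discounting function times a scale on outcomes, V = ∫₀ᵀ d(t)·u(x(t)) dt. A numerical sketch assuming exponential discounting at a constant rate (one admissible choice of d, not the paper's general form):

```python
import numpy as np

def discounted_value(outcome, discount, T, n=100_000):
    """Trapezoid-rule approximation of V = integral_0^T d(t) * u(x(t)) dt."""
    t = np.linspace(0.0, T, n)
    f = discount(t) * outcome(t)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

r = 0.05  # constant discount rate (illustrative)
V = discounted_value(outcome=lambda t: np.full_like(t, 10.0),
                     discount=lambda t: np.exp(-r * t), T=20.0)
```

For a constant outcome c the integral has the closed form c(1 − e^{−rT})/r, which the quadrature reproduces to high accuracy.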

  6. Real time natural object modeling framework

    International Nuclear Information System (INIS)

    Rana, H.A.; Shamsuddin, S.M.; Sunar, M.H.

    2008-01-01

CG (Computer Graphics) is a key technology for producing visual contents. Currently, computer-generated imagery techniques are being developed and applied, particularly in the fields of virtual reality applications, film production, training, and flight simulators, to provide total composition of realistic computer graphic images. Natural objects like clouds are an integral feature of the sky; without them synthetic outdoor scenes seem unrealistic. Modeling and animating such objects is a difficult task. Most systems are difficult to use, as they require adjustment of numerous, complex parameters and are non-interactive. This paper presents an intuitive, interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. A high-level interface models clouds through the visual use of cubes. Clouds are rendered using a hardware-accelerated API, OpenGL. The resulting interactive design and rendering system produces perceptually convincing cloud models that can be used in any interactive system. (author)

  7. Towards a Computational Model of a Methane Producing Archaeum

    Directory of Open Access Journals (Sweden)

    Joseph R. Peterson

    2014-01-01

Full Text Available Progress towards a complete model of the methanogenic archaeum Methanosarcina acetivorans is reported. We characterized the size distribution of the cells using differential interference contrast microscopy, finding them to be ellipsoidal with mean length and width of 2.9 μm and 2.3 μm, respectively, when grown on methanol, and 30% smaller when grown on acetate. We used the single molecule pull down (SiMPull) technique to measure the average copy number of the Mcr complex and ribosomes. A kinetic model for the methanogenesis pathways based on biochemical studies and recent metabolic reconstructions for several related methanogens is presented. In this model, 26 reactions in the methanogenesis pathways are coupled to a cell mass production reaction that updates enzyme concentrations. RNA expression data (RNA-seq) measured for cell cultures grown on acetate and methanol is used to estimate relative protein production per mole of ATP consumed. The model captures the experimentally observed methane production rates for cells growing on methanol and is most sensitive to the number of methyl-coenzyme-M reductase (Mcr) and methyl-tetrahydromethanopterin:coenzyme-M methyltransferase (Mtr) proteins. A draft transcriptional regulation network based on known interactions is proposed, which we intend to integrate with the kinetic model to allow dynamic regulation.

  8. Modeling the Technological Process for Harvesting of Agricultural Produce

    Science.gov (United States)

    Shepelev, S. D.; Shepelev, V. D.; Almetova, Z. V.; Shepeleva, N. P.; Cheskidov, M. V.

    2018-01-01

Substantiating the efficiency and the parameters of harvesting as a technological process makes it possible to reduce the cost of production and increase the profit of enterprises. The efficiency of combine harvesters can be increased, even when the level of technical equipment declines, through efficient daily and seasonal operating modes. Therefore, the correlation between daily operational time and the seasonal load of combine harvesters is found, and it is determined that an increase in the seasonal load prolongs the harvesters' daily operational time. The efficient time of the seasonal load can be increased by a reasonable ratio of crop varieties according to their ripening periods, thereby reducing the necessary quantity of machines by up to 40%. Through timing and field testing, the useful shift time coefficient of combine harvesters and the efficient operating modes of the machines are defined, and alternatives for improving the technical readiness of combine harvesters are identified.

  9. Spectral modeling of laser-produced underdense titanium plasmas

    Science.gov (United States)

    Chung, Hyun-Kyung; Back, Christina A.; Scott, Howard A.; Constantin, Carmen; Lee, Richard W.

    2004-11-01

Experiments were performed at the NIKE laser to create underdense low-Z plasmas with a small amount of high-Z dopant in order to study non-LTE population kinetics. Absolutely calibrated spectra in the 470-3000 eV range were measured, in both time-resolved and time-averaged fashion, from an SiO2 aerogel target with 3% Ti dopant. K-shell Ti emission was observed as well as L-shell Ti emission. Time-resolved emission shows that lower energy photons peak later than higher energy photons due to plasma cooling. In this work, we compare the measured spectra with non-LTE spectral calculations of titanium emission at relatively low temperatures; distributions dominated by L-shell ions will be discussed.

  10. In Vitro-Produced Pancreas Organogenesis Models In Three Dimensions

    DEFF Research Database (Denmark)

    Greggio, Chiara; De Franceschi, Filippo; Grapin-Botton, Anne

    2015-01-01

Three dimensional models of organ biogenesis have recently flourished. They promote a balance between stem/progenitor cell expansion and differentiation without the constraints of flat tissue culture vessels, allowing for autonomous self-organization of cells. Such models allow the formation of miniature organs in a dish and are emerging for the pancreas, starting from embryonic progenitors and adult cells. This review focusses on the currently available systems and how these allow new types of questions to be addressed. We discuss the expected advancements including their potential to study human......

  11. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  12. The Whole Shebang: How Science Produced the Big Bang Model.

    Science.gov (United States)

    Ferris, Timothy

    2002-01-01

    Offers an account of the accumulation of evidence that has led scientists to have confidence in the big bang theory of the creation of the universe. Discusses the early work of Ptolemy, Copernicus, Kepler, Galileo, and Newton, noting the rise of astrophysics, and highlighting the birth of the big bang model (the cosmic microwave background theory…

  13. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
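Two of the surveyed families can be contrasted directly: exponential discounting, f(d) = e^{−kd}, and Mazur's hyperbolic form, f(d) = 1/(1 + kd). A short sketch (the parameter value is arbitrary):

```python
import numpy as np

def exponential(k, d):
    """Exponential discount factor: f(d) = exp(-k * d)."""
    return np.exp(-k * d)

def hyperbolic(k, d):
    """Hyperbolic discount factor (Mazur): f(d) = 1 / (1 + k * d)."""
    return 1.0 / (1.0 + k * d)

delays = np.array([0.0, 1.0, 10.0, 100.0])
exp_f = exponential(0.1, delays)
hyp_f = hyperbolic(0.1, delays)
```

At long delays the hyperbolic factor stays far above the exponential one (at d = 100 with k = 0.1, roughly 0.09 versus 4.5e-5), which is what generates the preference reversals hyperbolic models are used to explain.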

  14. Inhibition of Dermatophilus congolensis infection in a mouse model by antibiotic-producing staphylococci.

    OpenAIRE

    Noble, W. C.; Lloyd, D. H.; Appiah, S. N.

    1980-01-01

    In an acute model of skin infection with Dermatophilus congolensis in the mouse, lesions can be prevented by simultaneous application of staphylococci which produce antibiotics; non-producer staphylococci fail to inhibit lesion formation.

  15. Inhibition of Dermatophilus congolensis infection in a mouse model by antibiotic-producing staphylococci.

    Science.gov (United States)

    Noble, W C; Lloyd, D H; Appiah, S N

    1980-12-01

    In an acute model of skin infection with Dermatophilus congolensis in the mouse, lesions can be prevented by simultaneous application of staphylococci which produce antibiotics; non-producer staphylococci fail to inhibit lesion formation.

  16. Real-time isothermal detection of Shiga toxin-producing Escherichia coli using recombinase polymerase amplification.

    Science.gov (United States)

    Murinda, Shelton E; Ibekwe, A Mark; Zulkaffly, Syaizul; Cruz, Andrew; Park, Stanley; Razak, Nur; Paudzai, Farah Md; Ab Samad, Liana; Baquir, Khairul; Muthaiyah, Kokilah; Santiago, Brenna; Rusli, Amirul; Balkcom, Sean

    2014-07-01

Shiga toxin-producing Escherichia coli (STEC) are a major family of foodborne pathogens of public health, zoonotic, and economic significance in the United States and worldwide. To date, there are no published reports on the use of recombinase polymerase amplification (RPA) for STEC detection. The primary goal of this study was to assess the potential application of RPA to the detection of STEC. This study focused on designing and evaluating RPA primers and fluorescent probes for isothermal (39°C) detection of STEC. Compatible sets of candidate primers and probes were designed for detection of Shiga toxin 1 and 2 (Stx1 and Stx2), respectively. The sets were evaluated for specificity and sensitivity against STEC (n=12) of various stx genotypes (stx1/stx2, stx1, or stx2, respectively), as well as non-Stx-producing E. coli (n=28) and other genera (n=7). The primers and probes were designed to target amplification of the subunit A moiety of stx1 and stx2. The assay detected STEC in real time (within 5-10 min at 39°C) with high sensitivity (93.5% vs. 90% for stx1 vs. stx2), specificity (99.1% vs. 100%), and predictive value (97.9% for both stx1 and stx2). Limits of detection of ∼5-50 colony-forming units/mL were achieved in serially diluted cultures grown in brain heart infusion broth. This study successfully demonstrated for the first time that RPA can be used for isothermal real-time detection of STEC.
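The reported sensitivity, specificity, and predictive value follow from standard confusion-matrix definitions. A sketch with hypothetical counts chosen only to mirror the order of magnitude of the reported figures (the study's raw counts are not given in this record):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and positive predictive value from
    confusion-matrix counts of an assay evaluation."""
    sensitivity = tp / (tp + fn)   # true positives among all true carriers
    specificity = tn / (tn + fp)   # true negatives among all non-carriers
    ppv = tp / (tp + fp)           # correct calls among positive results
    return sensitivity, specificity, ppv

# Hypothetical counts for illustration (not the study's data)
sens, spec, ppv = diagnostic_metrics(tp=29, fp=1, tn=106, fn=2)
```

With these counts the three metrics come out near 93.5%, 99.1%, and 96.7%, comparable in scale to the stx1 figures quoted above.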

  17. 3D Modelling and Printing Technology to Produce Patient-Specific 3D Models.

    Science.gov (United States)

    Birbara, Nicolette S; Otton, James M; Pather, Nalini

    2017-11-10

A comprehensive knowledge of mitral valve (MV) anatomy is crucial in the assessment of MV disease. While the use of three-dimensional (3D) modelling and printing in MV assessment has undergone early clinical evaluation, the precision and usefulness of this technology require further investigation. This study aimed to assess and validate 3D modelling and printing technology to produce patient-specific 3D MV models. A prototype method for MV 3D modelling and printing was developed from computed tomography (CT) scans of a plastinated human heart. Mitral valve models were printed using four 3D printing methods and validated to assess precision. Cardiac CT and 3D echocardiography imaging data of four MV disease patients were used to produce patient-specific 3D printed models, and 40 cardiac health professionals (CHPs) were surveyed on the perceived value and potential uses of 3D models in a clinical setting. The prototype method demonstrated submillimetre precision for all four 3D printing methods used, and statistical analysis showed a significant difference. The 3D printed models, particularly those using multiple print materials, were considered useful by CHPs for preoperative planning, as well as for other applications such as teaching and training. This study suggests that, with further advances in 3D modelling and printing technology, patient-specific 3D MV models could serve as a useful clinical tool. The findings also highlight the potential of this technology to be applied in a variety of medical areas within both clinical and educational settings. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  18. Eliminating time dispersion from seismic wave modeling

    Science.gov (United States)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
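A post-simulation transform of this kind can be sketched as a frequency-dependent time shift applied in the Fourier domain; the exact dispersion relation derived in the paper is not reproduced here, so a generic delay function stands in for it:

```python
import numpy as np

def shift_per_frequency(signal, dt, delay_fn):
    """Apply a frequency-dependent time shift via the FFT: the component at
    angular frequency w is delayed by delay_fn(w) seconds. This is a generic
    sketch of a post-simulation dispersion transform, not the paper's exact
    operator."""
    n = signal.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    spectrum = np.fft.fft(signal) * np.exp(-1j * w * delay_fn(w))
    return np.fft.ifft(spectrum).real

# Sanity check: a constant delay of 10 samples equals a circular shift
fs = 1000.0
sig = np.sin(2 * np.pi * 25 * np.arange(256) / fs)
shifted = shift_per_frequency(sig, 1 / fs, lambda w: 10 / fs)
```

Replacing the constant delay with a function of frequency (and of the simulation time step) gives the kind of transform that adds or removes time dispersion from a synthetic seismogram after the run.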

  19. Real-time Social Internet Data to Guide Forecasting Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Valle, Sara Y. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-20

Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach is real-time, data-driven forecasting with quantified uncertainty: not just for weather anymore. Information flowing from human observations of events through an Internet system and classification algorithms is used to produce forecasts with quantified uncertainty. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  20. Independent power producer parallel operation modeling in transient network simulations for interconnected distributed generation studies

    Energy Technology Data Exchange (ETDEWEB)

    Moura, Fabricio A.M.; Camacho, Jose R. [Universidade Federal de Uberlandia, School of Electrical Engineering, Rural Electricity and Alternative Sources Lab, PO Box 593, 38400.902 Uberlandia, MG (Brazil); Chaves, Marcelo L.R.; Guimaraes, Geraldo C. [Universidade Federal de Uberlandia, School of Electrical Engineering, Power Systems Dynamics Group, PO Box: 593, 38400.902 Uberlandia, MG (Brazil)

    2010-02-15

The main task in this paper is to present a performance analysis of a distribution network in the presence of an independent power producer (IP) synchronous generator, with its speed governor and voltage regulator modeled using TACS (Transient Analysis of Control Systems), for distributed generation studies. The regulators were implemented through their transfer functions in the S domain. However, since ATP-EMTP (Electromagnetic Transients Program) works in the time domain, a discretization is necessary to return the TACS output to the time domain. It must be highlighted that this generator is driven by a steam turbine, and the whole system, with regulators and the equivalent of the power authority system at the common coupling point (CCP), is modeled in the ATP-EMTP (Alternative Transients Program). (author)
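The S-domain-to-time-domain discretization step described above can be illustrated with the bilinear (Tustin) rule applied to a first-order lag G(s) = 1/(1 + sT), a common building block of regulator models; the transfer functions and parameter values here are illustrative, not those used in the paper:

```python
import numpy as np

def tustin_first_order_lag(T, dt):
    """Discretize G(s) = 1/(1 + sT) with the bilinear (Tustin) rule:
    s -> (2/dt)*(1 - z^-1)/(1 + z^-1), giving
    G(z) = (dt + dt*z^-1) / ((dt + 2T) + (dt - 2T)*z^-1).
    Returns coefficients for y[n] = b0*u[n] + b1*u[n-1] - a1*y[n-1]."""
    b0 = dt / (dt + 2 * T)
    b1 = b0
    a1 = (dt - 2 * T) / (dt + 2 * T)
    return (b0, b1), a1

def step_response(T, dt, n):
    """Unit-step response of the discretized lag (illustrative values)."""
    (b0, b1), a1 = tustin_first_order_lag(T, dt)
    u = np.ones(n)
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = b0 * u[k] + b1 * u[k - 1] - a1 * y[k - 1]
    return y
```

The discrete recursion reproduces the continuous lag's behaviour: the step response settles at unity and reaches roughly 63% of its final value after one time constant.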

  1. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian Søndergaard

    2004-01-01

A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  2. A new model to produce sagittal plane rotational induced diffuse axonal injuries

    Directory of Open Access Journals (Sweden)

Johan Davidsson

    2011-06-01

A new in vivo animal model that produces diffuse brain injuries (DBI) in sagittal plane rearward rotational acceleration has been developed. In this model, the skull of an anesthetized adult rat is tightly secured to a rotating bar. During trauma, the bar is impacted by a striker that causes the bar and the animal head to rotate rearward; the acceleration phase lasts 0.4 ms and is followed by a rotation at constant speed and a gentle deceleration when the bar makes contact with a padded stop. The total head angle change is less than 30 degrees. By adjusting the air pressure in the rifle used to accelerate the striker, resulting rotational accelerations between 0.3 and 2.1 Mrad/s² can be produced. Numerous combinations of trauma levels, post-trauma survival times, brain and serum retrieval and tissue preparation techniques were adopted to characterise this new model. The trauma caused subdural bleedings in animals exposed to severe trauma. Staining brain tissue with β-Amyloid Precursor Protein antibodies and FD Neurosilver, which detect degenerating axons, revealed widespread axonal injuries (AI) in the corpus callosum, the border between the corpus callosum and cortex, and in tracts in the brain stem. The observed AIs were apparent only when the rotational acceleration level was moderate and above. On the contrary, only limited signs of contusion injuries were observed following trauma. S100 serum analyses indicate that blood vessel and glia cell injuries occur following moderate levels of trauma despite the absence of obvious BBB injuries. We conclude that this rotational trauma model is capable of producing graded axonal injury, is repeatable and produces limited other types of traumatic brain injuries (TBI), and as such is useful in the study of injury biomechanics, diagnostics and treatment strategies following diffuse axonal injury (DAI).

  3. A new model to produce sagittal plane rotational induced diffuse axonal injuries.

    Science.gov (United States)

    Davidsson, Johan; Risling, Marten

    2011-01-01

A new in vivo animal model that produces diffuse brain injuries in sagittal plane rearward rotational acceleration has been developed. In this model, the skull of an anesthetized adult rat is tightly secured to a rotating bar. During trauma, the bar is impacted by a striker that causes the bar and the animal head to rotate rearward; the acceleration phase lasts 0.4 ms and is followed by a rotation at constant speed and a gentle deceleration when the bar makes contact with a padded stop. The total head angle change is less than 30°. By adjusting the air pressure in the rifle used to accelerate the striker, resulting rotational accelerations between 0.3 and 2.1 Mrad/s² can be produced. Numerous combinations of trauma levels, post-trauma survival times, brain and serum retrieval, and tissue preparation techniques were adopted to characterize this new model. The trauma caused subdural bleedings in animals exposed to severe trauma. Staining brain tissue with β-Amyloid Precursor Protein antibodies and FD Neurosilver that detect degenerating axons revealed widespread axonal injuries (AI) in the corpus callosum, the border between the corpus callosum and cortex and in tracts in the brain stem. The observed AIs were apparent only when the rotational acceleration level was moderate and above. On the contrary, only limited signs of contusion injuries were observed following trauma. Macrophage invasions, glial fibrillary acidic protein redistribution or hypertrophy, and blood brain barrier (BBB) changes were unusual. S100 serum analyses indicate that blood vessel and glia cell injuries occur following moderate levels of trauma despite the absence of obvious BBB injuries. We conclude that this rotational trauma model is capable of producing graded axonal injury, is repeatable and produces limited other types of traumatic brain injuries and as such is useful in the study of injury biomechanics, diagnostics, and treatment strategies following diffuse axonal injury.

  4. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently developed inner-outer factorization technique. Compared to other analogous counterparts, the proposed method is shown to provide more accurate results in terms of time-weighted norms when applied to different practical examples. The results are further illustrated by a numerical example.

  5. Chronic ethanol exposure produces time- and brain region-dependent changes in gene coexpression networks.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Osterndorff-Kahanek

Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes.
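Weighted gene correlation network analysis, mentioned above, builds its networks from a soft-thresholded correlation matrix. A minimal sketch of that construction (the exponent beta = 6 is a conventional default for this method, not a value taken from the study):

```python
import numpy as np

def soft_threshold_adjacency(expr, beta=6):
    """Coexpression adjacency a_ij = |cor(x_i, x_j)|**beta, the soft
    threshold used in weighted gene correlation network analysis.

    expr: 2-D array with genes in rows and samples in columns.
    beta: soft-threshold power (illustrative default)."""
    adj = np.abs(np.corrcoef(expr)) ** beta
    np.fill_diagonal(adj, 0.0)   # no self-connections
    return adj
```

Raising correlations to a power suppresses weak, noisy links while preserving strong coexpression, which is what makes module detection on the resulting weighted network meaningful.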

  6. An Animal Model Using Metallic Ions to Produce Autoimmune Nephritis

    Directory of Open Access Journals (Sweden)

    Roxana Ramírez-Sandoval

    2015-01-01

Autoimmune nephritis triggered by metallic ions was assessed in a Long-Evans rat model. The parameters evaluated included antinuclear autoantibody production, kidney damage mediated by immune complexes detected by immunofluorescence, and renal function tested by retention of nitrogen waste products and proteinuria. To accomplish our goal, the animals were treated with the following ionic metals: HgCl2, CuSO4, AgNO3, and Pb(NO3)2. A group without ionic metals was used as the control. The results of the present investigation demonstrated that metallic ions triggered antinuclear antibody production in 60% of animals, some of them with anti-DNA specificity. Furthermore, all animals treated with heavy metals developed toxic glomerulonephritis with immune complex deposition along the mesangium and membranes. These phenomena were accompanied by proteinuria and increased concentrations of urea. Based on these results, we conclude that metallic ions may induce experimental autoimmune nephritis.

  7. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series, and the prediction is made by a global model or adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since the measurement or data transmission intermittently fails for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values, generated by applying a particular random process to the original (complete) time series, is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
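The two ingredients described here — imputing missing values and reconstructing a time-delayed phase space — can be sketched as follows (plain linear interpolation stands in for the richer imputation methods listed; function names are illustrative):

```python
import numpy as np

def impute_linear(x):
    """Fill NaN gaps in a 1-D series by linear interpolation between
    the nearest observed values (the simplest imputing option)."""
    t = np.arange(len(x))
    missing = np.isnan(x)
    x = x.copy()
    x[missing] = np.interp(t[missing], t[~missing], x[~missing])
    return x

def delay_embed(x, dim, tau):
    """Reconstruct a time-delayed phase space: each row is
    [x(t), x(t + tau), ..., x(t + (dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

After imputation, the embedded rows serve as the reconstructed state vectors in which dynamical neighbors are searched for local-model prediction.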

  8. Biomechanical model produced from light-activated dental composite resins: a holographic analysis

    Science.gov (United States)

    Pantelić, Dejan; Vasiljević, Darko; Blažić, Larisa; Savić-Šević, Svetlana; Murić, Branka; Nikolić, Marko

    2013-11-01

    Light-activated dental composites, commonly applied in dentistry, can be used as excellent material for producing biomechanical models. They can be cast in almost any shape in an appropriate silicone mold and quickly solidified by irradiation with light in the blue part of the spectrum. In that way, it is possible to obtain any number of nearly identical casts. The models can be used to study the behavior of arbitrary structure under mechanical loads. To test the technique, a simple mechanical model of the tooth with a mesio-occluso-distal cavity was manufactured. Composite resin restoration was placed inside the cavity and light cured. Real-time holographic interferometry was used to analyze the contraction of the composite resin and its effect on the surrounding material. The results obtained in the holographic experiment were in good agreement with those obtained using the finite element method.

  9. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while in general model-based approaches present serious drawbacks on this point

  11. Real-time measurement of materials properties at high temperatures by laser produced plasmas

    Science.gov (United States)

    Kim, Yong W.

    1990-01-01

    Determination of elemental composition and thermophysical properties of materials at high temperatures, as visualized in the context of containerless materials processing in a microgravity environment, presents a variety of unusual requirements owing to the thermal hazards and interferences from electromagnetic control fields. In addition, such information is intended for process control applications and thus the measurements must be real time in nature. A new technique is described which was developed for real time, in-situ determination of the elemental composition of molten metallic alloys such as specialty steel. The technique is based on time-resolved spectroscopy of a laser produced plasma (LPP) plume resulting from the interaction of a giant laser pulse with a material target. The sensitivity and precision were demonstrated to be comparable to, or better than, the conventional methods of analysis which are applicable only to post-mortem specimens sampled from a molten metal pool. The LPP technique can be applied widely to other materials composition analysis applications. The LPP technique is extremely information rich and therefore provides opportunities for extracting other physical properties in addition to the materials composition. The case in point is that it is possible to determine thermophysical properties of the target materials at high temperatures by monitoring generation and transport of acoustic pulses as well as a number of other fluid-dynamic processes triggered by the LPP event. By manipulation of the scaling properties of the laser-matter interaction, many different kinds of flow events, ranging from shock waves to surface waves to flow induced instabilities, can be generated in a controllable manner. Time-resolved detection of these events can lead to such thermophysical quantities as volume and shear viscosities, thermal conductivity, specific heat, mass density, and others.

  12. Quantitative Real-time PCR detection of putrescine-producing Gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Kristýna Maršálková

    2017-01-01

Biogenic amines are indispensable components of living cells; nevertheless, at higher concentrations these compounds can be toxic to human health. Putrescine is considered the major biogenic amine associated with microbial food spoilage. The development of reliable, fast and culture-independent molecular methods to detect bacteria producing biogenic amines deserves attention, especially from the food industry, in order to protect health. The objective of this study was to verify newly designed primer sets for the detection of two inducible genes, adiA and speF, in the Salmonella enterica and Escherichia coli genomes by Real-time PCR. These genes encode enzymes in the metabolic pathway that leads to the production of putrescine in Gram-negative bacteria. Moreover, the relative expression of these genes was studied in the E. coli CCM 3954 strain using Real-time PCR. In this study, sets of new primers for the detection of the two inducible genes (speF and adiA) in Salmonella enterica and E. coli by Real-time PCR were designed and tested. The amplification efficiency of the Real-time PCR was calculated from the slope of the standard curves (adiA, speF, gapA). Efficiencies in the range of 95 to 105% were achieved for all tested reactions. The relative gene expression (R) of the adiA and speF genes in E. coli varied depending on culture conditions. The highest gene expression of adiA and speF was observed at 6, 24 and 36 h (RadiA ~ 3, 5, 9; RspeF ~ 11, 10, 9, respectively) after initiation of growth of the bacteria in nutrient broth medium enriched with amino acids. The results show that these primers could be used for relative quantification analysis of E. coli.
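Efficiency-corrected relative expression ratios of the kind reported here (R values for adiA and speF against the gapA reference) are commonly computed with the Pfaffl method. A sketch, with E expressed as fold amplification per cycle so that 100% efficiency gives E = 2 (the Ct values below are made up for illustration):

```python
def pfaffl_ratio(e_target, ct_target_control, ct_target_sample,
                 e_ref, ct_ref_control, ct_ref_sample):
    """Pfaffl relative expression ratio using per-assay amplification
    efficiencies: R = E_target**dCt_target / E_ref**dCt_ref, where
    dCt = Ct(control) - Ct(sample). E = 2.0 corresponds to 100 % efficiency."""
    dct_target = ct_target_control - ct_target_sample
    dct_ref = ct_ref_control - ct_ref_sample
    return (e_target ** dct_target) / (e_ref ** dct_ref)
```

With both assays at 100% efficiency, a target Ct three cycles earlier in the sample and an unchanged reference gives an eight-fold relative expression.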

  13. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This makes it possible to formulate complex measures – involving expected as well as accumulated rewards – in a precise and

  14. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
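The operational-time idea can be made concrete for a two-state process: if the process is homogeneous with rate λ on the operational scale h(t), the probability of a transition over an interval depends only on h(t) - h(s). A sketch using a power-law transformation h(t) = t**k (illustrative, not the parametric family estimated by the authors):

```python
import math

def transition_prob(s, t, lam, k):
    """P(transition in (s, t]) for a two-state nonhomogeneous Markov process
    that is homogeneous with rate lam on the operational time scale
    h(t) = t**k. k = 1 recovers the time-homogeneous process; k > 1 gives
    a hazard that increases with time from the process origin."""
    h = lambda u: u ** k
    return 1.0 - math.exp(-lam * (h(t) - h(s)))
```

The transformation thus absorbs all nonhomogeneity: estimation can focus on h and the (constant) intensities on the operational scale, as the paper proposes.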

  15. Timed Model Checking of Security Protocols

    NARCIS (Netherlands)

    Corin, R.J.; Etalle, Sandro; Hartel, Pieter H.; Mader, Angelika H.

    We propose a method for engineering security protocols that are aware of timing aspects. We study a simplified version of the well-known Needham Schroeder protocol and the complete Yahalom protocol. Timing information allows the study of different attack scenarios. We illustrate the attacks by model

  16. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...
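The relationship at the heart of discrete failure-time analysis — survival as a running product of one minus the discrete hazards — is compact enough to sketch:

```python
def survival_from_discrete_hazard(hazards):
    """Given discrete hazards lambda_j = P(fail at j | survived to j),
    return the survival function S(t) = prod_{j <= t} (1 - lambda_j)."""
    s = 1.0
    out = []
    for lam in hazards:
        s *= (1.0 - lam)
        out.append(s)
    return out
```

This product form is why grouped or interval-measured failure times call for discrete hazard regression rather than continuous-time survival machinery.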

  17. Modeling preference time in middle distance triathlons

    OpenAIRE

    Fister, Iztok; Iglesias, Andres; Deb, Suash; Fister, Dušan; Fister Jr, Iztok

    2017-01-01

Modeling preference time in triathlons means predicting the intermediate times of particular sports disciplines from a given overall finish time in a specific triathlon course for an athlete with a known personal best result. This is a hard task for athletes and sport trainers, because many different factors need to be taken into account, e.g., the athlete's abilities, health, mental preparation and even current sports form. So far, this process was calculated manually without any ...

  18. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
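As a taste of the digital waveguide family mentioned above, the Karplus-Strong plucked-string algorithm combines a delay line with a lowpass loss filter in its feedback loop; it is closely related to the vibrating-string waveguide model discussed in the article (parameters below are illustrative):

```python
import numpy as np

def karplus_strong(freq, sr, dur, seed=0):
    """Plucked-string sketch in the digital-waveguide family: a delay line
    initialized with noise (the pluck) whose contents are repeatedly fed
    through a two-point averaging (loss) filter."""
    rng = np.random.default_rng(seed)
    n_delay = int(sr / freq)                  # delay-line length sets the pitch
    buf = rng.uniform(-1.0, 1.0, n_delay)     # initial excitation
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n_delay]
        # lowpass feedback: average the current and next delay-line samples,
        # which damps high frequencies faster than the fundamental
        buf[i % n_delay] = 0.5 * (buf[i % n_delay] + buf[(i + 1) % n_delay])
    return out
```

The averaging filter makes the upper partials decay quickly while the fundamental rings on, which is what gives the output its string-like timbre.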

  19. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
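The recursive individualization step can be illustrated with a single scalar Kalman update for one model parameter as each new performance measurement arrives; this is a generic sketch, not the UMP's actual state vector or parameterization:

```python
def kalman_update(theta, P, y, h, H, R):
    """One scalar (extended) Kalman step for a model parameter theta.

    y: new observation; h: model prediction at the current theta;
    H: sensitivity dh/dtheta (the linearization used by an EKF);
    P: parameter variance; R: measurement noise variance.
    All quantities here are illustrative scalars."""
    S = H * P * H + R          # innovation variance
    K = P * H / S              # Kalman gain
    theta_new = theta + K * (y - h)
    P_new = (1.0 - K * H) * P
    return theta_new, P_new
```

Applied repeatedly, the update pulls the parameter toward values consistent with the incoming observations while the variance P shrinks, mirroring how the individualized estimates progressively approach the post-hoc fit described above.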

  20. Time course of virulence factors produced by group A streptococcus during a food-borne epidemic.

    Science.gov (United States)

    Kanno, Takeshi; Sakaguchi, Kazuko; Suzuki, Jun

    2012-02-01

We studied the protein amount and activity of the major virulence factors hemolysin, cysteine protease streptococcal pyrogenic exotoxin B (SpeB), and NAD glycohydrolase (NADase), which are produced by Streptococcus pyogenes type T-25, associated with a food poisoning outbreak. The three virulence factors were analyzed for activity and amount of protein using supernatants from 2-30 h of culture. All three virulence factors were confirmed by their activity. Streptolysin O (SLO), SpeB, and NADase were immunochemically confirmed at the protein level by Western blot analysis. Two hemolytic forms (70 and 60 kDa) of SLO were identified. SpeB was detected as a 44-kDa precursor form and a 30-kDa mature form. NADase was 50 kDa. SLO protein peaked at 8 h of culture, which corresponded with the hemolytic activity peak. Conversion from precursor to mature SpeB protein peaked at 14 h of culture; the conversion peak corresponded to the activity expression time. Also, mature SpeB protein peaked at 24 h of culture and corresponded to the SpeB activity peak. Electrophoretic analysis clarified the relationship between SLO protein and SpeB protein, although amounts of SLO and SpeB have been reported to be inversely proportional to activity. NADase protein peaked at 12 h of culture, but the protein level did not correspond to the activity peak. Because the NADase protein peak was closer to SpeB activity than SLO protein, our results suggest that NADase protein was degraded at 12 h of culture. The time course production of these virulence factors is discussed.

  1. Adderall produces increased striatal dopamine release and a prolonged time course compared to amphetamine isomers.

    Science.gov (United States)

    Joyce, B Matthew; Glaser, Paul E A; Gerhardt, Greg A

    2007-04-01

    Adderall is currently used for the treatment of Attention-Deficit Hyperactivity Disorder (ADHD) and is composed of a novel mixture of approximately 24% L-amphetamine and 76% D-amphetamine salts. There are, however, no investigations of the pharmacological effects of this combination in vivo. The technique of high-speed chronoamperometry using Nafion-coated single carbon-fiber microelectrodes was used to study amphetamine-evoked dopamine (DA) release produced by Adderall, D-amphetamine, or D,L-amphetamine in the striatum of anesthetized male Fischer 344 (F344) rats. The amphetamine solutions were locally applied from micropipettes by pressure ejection. Local applications of Adderall resulted in significantly greater DA release signal amplitudes with prolonged time course of dopamine release and re-uptake as compared to D-amphetamine and D,L-amphetamine. These data support the hypothesis that the combination of amphetamine enantiomers and salts in Adderall has effects on DA release, which result in increased and prolonged DA release, compared to D- and D,L-amphetamine.

  2. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  3. Historical series and Near Real Time data analysis produced within ASI-SRV project infrastructures

    Science.gov (United States)

    Silvestri, M.; Musacchio, M.; Buongiorno, M.; Corradini, S.; Lombardo, V.; Merucci, L.; Spinetti, C.; Sansosti, E.; Pugnaghi, S.; Teggi, S.; Vignoli, S.; Amodio, A.; Dini, L.

    2009-12-01

    ASI-Sistema Rischio Vulcanico (SRV) project is devoted to the development of a pre-operative integrated system managing different Earth Observation (EO) and non-EO data to respond to specific needs of the Italian Civil Protection Department (DPC) and improve the monitoring of Italian active volcanoes. The project provides the capability to maintain a repository where the acquired data are stored and generates products offering support to risk managers during the different phases of volcanic activity. All the products are obtained following the technical choices and developments of ASI-SRV, based on flexible and scalable modules which also take into account new coming space sensors and new processing algorithms. An important step of the project development concerns the technical and scientific feasibility of the provided products, which depends on data availability, on the accuracy of the algorithms and models used in the processing and, of course, on the possibility to validate the results by comparison with independent non-EO measurements. The ASI-SRV infrastructure is based on a distributed client/server architecture, which implies that the different processors need to ingest data sets characterized by a constant and common structure. In its final version, ASI-SRV will develop a centralized HW-SW system located at INGV which will control two complete processing chains, one located at INGV for optical data, and the other located at IREA for SAR data. The produced results will be disseminated through a WEB-GIS interface which will allow the DPC to overview and assimilate the products in a format compatible with their local monitoring system, in order to have immediate use of the provided information. In this paper, the first results producing ground deformation measurements via Differential Interferometric SAR (DInSAR) techniques using SAR data, and via the application of the Small BAseline Subset (SBAS) technique developed at IREA, are reported. Moreover different

  4. Time dependent viscous string cloud cosmological models

    Science.gov (United States)

    Tripathy, S. K.; Nayak, S. K.; Sahu, S. K.; Routray, T. R.

    2009-09-01

    Bianchi type-I string cosmological models are studied in Saez-Ballester theory of gravitation when the source for the energy momentum tensor is a viscous string cloud coupled to the gravitational field. The bulk viscosity is assumed to vary with time and is related to the scalar expansion. The relationship between the proper energy density ρ and the string tension density λ is investigated for two different cosmological models.

  5. A real time hyperelastic tissue model.

    Science.gov (United States)

    Zhong, Hualiang; Peters, Terry

    2007-06-01

    Real-time soft tissue modeling has a potential application in medical training, procedure planning and image-guided therapy. This paper characterizes the mechanical properties of organ tissue using a hyperelastic material model, an approach which is then incorporated into a real-time finite element framework. While generalizable, in this paper we use the published mechanical properties of pig liver to characterize an example application. Specifically, we calibrate the parameters of an exponential model, with a least-squares method (LSM) using the assumption that the material is isotropic and incompressible in a uniaxial compression test. From the parameters obtained, the stress-strain curves generated from the LSM are compared to those from the corresponding computational model solved by ABAQUS and also to experimental data, resulting in mean errors of 1.9 and 4.8%, respectively, which are considerably better than those obtained when employing the Neo-Hookean model. We demonstrate our approach through the simulation of a biopsy procedure, employing a tetrahedral mesh representation of human liver generated from a CT image. Using the material properties along with the geometric model, we develop a nonlinear finite element framework to simulate the behaviour of liver during an interventional procedure with a real-time performance achieved through the use of an interpolation approach.
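A least-squares calibration of the kind described can be sketched as follows; the one-term exponential stress-strain law, the synthetic "pig liver" data, and the coarse grid search are all illustrative assumptions standing in for the paper's actual LSM fit:

```python
import math

def stress(a, b, strain):
    """One-term exponential stress-strain law (illustrative stand-in)."""
    return a * (math.exp(b * strain) - 1.0)

# Synthetic uniaxial-compression data generated from known parameters.
a_true, b_true = 2.0, 5.0
strains = [i * 0.02 for i in range(1, 11)]
data = [(e, stress(a_true, b_true, e)) for e in strains]

# Least-squares calibration by coarse grid search over (a, b).
best, best_err = None, float("inf")
for a in [0.5 + 0.1 * i for i in range(40)]:
    for b in [1.0 + 0.1 * j for j in range(90)]:
        err = sum((stress(a, b, e) - s) ** 2 for e, s in data)
        if err < best_err:
            best, best_err = (a, b), err

print(best)  # grid point nearest the generating parameters (2.0, 5.0)
```

In practice a Gauss-Newton or Levenberg-Marquardt solver would replace the grid search; the grid keeps the sketch dependency-free.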

  6. Inflation-produced magnetic fields in RnF2 and IF2 models

    International Nuclear Information System (INIS)

    Campanelli, L.; Cea, P.; Fogli, G. L.; Tedesco, L.

    2008-01-01

    We reanalyze the production of seed magnetic fields during inflation in (R/m²)ⁿ F_μν F^μν and I F_μν F^μν models, where n is a positive integer, R the Ricci scalar, m a mass parameter, and I ∝ η^α a power-law function of the conformal time η, with α a positive real number. If m is the electron mass, the produced fields are uninterestingly small for all n. Taking m as a free parameter, we find that, for n ≥ 2, the produced magnetic fields can be sufficiently strong to seed the dynamo mechanism and then to explain galactic magnetism. For α ≳ 2, there is always a window in the parameters defining inflation such that the generated magnetic fields are astrophysically interesting. Moreover, if inflation is (almost) de Sitter and the produced fields almost scale invariant (α ≅ 4), their intensity can be strong enough to directly explain the presence of microgauss galactic magnetic fields

  7. A stochastic space-time model for intermittent precipitation occurrences

    KAUST Repository

    Sun, Ying

    2016-01-28

    Modeling a precipitation field is challenging due to its intermittent and highly scale-dependent nature. Motivated by the features of high-frequency precipitation data from a network of rain gauges, we propose a threshold space-time t random field (tRF) model for 15-minute precipitation occurrences. This model is constructed from a space-time Gaussian random field (GRF) with a random scaling factor that varies over time, or over space and time. It can be viewed as a generalization of the purely spatial tRF, and it has a hierarchical representation that allows for Bayesian interpretation. Developing appropriate tools for evaluating precipitation models is a crucial part of the model-building process, and we focus on evaluating whether models can reproduce the observed conditional dry and rain probabilities given that some set of neighboring sites all have rain or all have no rain. These conditional probabilities show that the proposed space-time model noticeably improves some characteristics of joint rainfall occurrences for the data we have considered.
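The thresholding idea behind such occurrence models can be sketched minimally (this is not the authors' tRF: a plain Gaussian AR(1) field with an assumed shared spatial component, thresholded so that "rain" occurs where the latent field exceeds a cutoff):

```python
import random, math

random.seed(42)
c = 0.8     # occurrence threshold (assumed)
phi = 0.7   # temporal AR(1) coefficient (assumed)
rho = 0.6   # weight of the shared spatial component (assumed)

def simulate(n_steps, n_sites):
    """Threshold a correlated Gaussian field: rain occurs where z > c."""
    z = [0.0] * n_sites
    occ = []
    for _ in range(n_steps):
        common = random.gauss(0, 1)  # component shared by all sites
        z = [phi * zi + math.sqrt(1 - phi**2) *
             (rho * common + math.sqrt(1 - rho**2) * random.gauss(0, 1))
             for zi in z]
        occ.append([zi > c for zi in z])
    return occ

occ = simulate(20000, 2)
# Conditional probability of rain at site 0 given rain at site 1,
# versus the marginal probability of rain at site 0:
joint = sum(a and b for a, b in occ)
p_cond = joint / sum(b for _, b in occ)
p_marg = sum(a for a, _ in occ) / len(occ)
print(p_cond > p_marg)  # spatial correlation raises the conditional probability
```

Checking such conditional rain probabilities against data is exactly the kind of evaluation the abstract describes.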

  8. A Monte Carlo model to produce baryons in e+e- annihilation

    International Nuclear Information System (INIS)

    Meyer, T.

    1981-08-01

    A simple model is described that extends the Field-Feynman model to baryon production in quark fragmentation. The model predicts baryon-baryon correlations within jets and in opposite jets produced in electron-positron annihilation. Existing data are well described by the model. (orig.)

  9. Real-time modeling of heat distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.

  10. Applicability of Simplified Simulation Models for Perforation-Mediated Modified Atmosphere Packaging of Fresh Produce

    Directory of Open Access Journals (Sweden)

    Min-Ji Kwon

    2013-01-01

    The comprehensive mass balances of differential equations involving gas diffusion and hydraulic convection through the package perforation, gas permeation through the polymeric film, and produce respiration have commonly been used to predict the atmosphere of perforated fresh produce packages. However, the predictions often suffer from instability, and to circumvent this problem, a simplified diffusion model that omits convective gas transfer, as well as empirical models based on experimental mass transfer data, have been developed and investigated previously by several researchers. This study investigated the potential and limitations of the simplified diffusion model and two empirical models for predicting the atmosphere in perforated produce packages. The simplified diffusion model satisfactorily estimated the atmosphere inside the perforated packages of fresh produce under the aerobic conditions examined. Published empirical models of the mass transfer coefficients of the perforation appear valid only for the conditions under which they were measured, and thus should be used carefully for that specific purpose.
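A simplified diffusion model of this kind can be sketched as a single mass balance on headspace O₂, with Fickian diffusion through the perforation and a constant respiration sink; every parameter value below is an illustrative assumption, not data from the study:

```python
# All parameter values are illustrative assumptions, not data from the study.
D_O2 = 7.0e-6    # O2 diffusivity in air, m^2/s (approx.)
A = 3.1e-6       # perforation cross-section, m^2
L = 1.0e-3       # perforation length (film thickness), m
V = 2.0e-3       # package headspace volume, m^3
R = 2.0e-9       # produce O2 consumption rate, m^3 O2 / s (assumed constant)
o2_out, o2 = 0.21, 0.21   # O2 volume fractions outside / inside

# Explicit Euler integration of V * d(o2)/dt = diffusive inflow - respiration.
dt, t = 10.0, 0.0
while t < 48 * 3600:                       # simulate 48 h
    flux = D_O2 * A / L * (o2_out - o2)    # Fickian inflow, m^3 O2 / s
    o2 += dt * (flux - R) / V
    t += dt
print(round(o2, 4))  # headspace O2 settles below ambient but stays aerobic
```

The full models in the literature add CO₂ balances, film permeation, and convective terms; this single-gas sketch shows why the diffusion-only form is numerically tame.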

  11. A COMPARISON OF THE TENSILE STRENGTH OF PLASTIC PARTS PRODUCED BY A FUSED DEPOSITION MODELING DEVICE

    OpenAIRE

    Juraj Beniak; Peter Križan; Miloš Matúš

    2015-01-01

    Rapid Prototyping systems are nowadays increasingly used in many areas of industry, not only for producing design models but also for producing parts for final use. We need to know the properties of these parts. When we talk about the Fused Deposition Modeling (FDM) technique and FDM devices, there are many possible settings for devices and models which could influence the properties of a final part. In addition, devices based on the same principle may use different operational software for c...

  12. Space-time modeling of timber prices

    Science.gov (United States)

    Mo Zhou; Joseph Buongriorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber timber prices of 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...

  13. A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Martin Johannes [Baylor Univ., Waco, TX (United States)

    2011-05-01

    We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from p$\\bar{p}$ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓvb$\\bar{b}$ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c2.

  14. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up a n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  15. Time-dependent intranuclear cascade model

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Kostenko, B.F.; Zadorogny, A.M.

    1980-01-01

    An intranuclear cascade model with explicit consideration of the time coordinate in the Monte Carlo simulation of the development of a cascade particle shower has been considered. Calculations have been performed using a diffuse nuclear boundary without any step approximation of the density distribution. Changes in the properties of the target nucleus during the cascade development have been taken into account. The results of these calculations have been compared with experiment and with the data which had been obtained by means of a time-independent cascade model. The consideration of time improved agreement between experiment and theory particularly for high-energy shower particles; however, for low-energy cascade particles (with grey and black tracks in photoemulsion) a discrepancy remains at T >= 10 GeV. (orig.)

  16. Estimation of biogas produced by the landfill of Palermo, applying a Gaussian model.

    Science.gov (United States)

    Aronica, S; Bonanno, A; Piazza, V; Pignato, L; Trapani, S

    2009-01-01

    In this work, a procedure is proposed to assess the rate of biogas emitted by the Bellolampo landfill (Palermo, Italy), starting from data acquired by two of the stations monitoring meteorological parameters and polluting gases. The data used refer to the period November 2005-July 2006. The methane concentration, measured in the CEP suburb of Palermo, has been analysed together with the meteorological data collected by the station situated inside the landfill area. In the present study, methane is used as a tracer of the atmospheric pollutants produced by the dump. The data used for assessing the biogas emission refer to night-time periods characterized by weak wind blowing from the hill toward the city. The methane rate emitted by the Bellolampo dump has been evaluated using a Gaussian model, considering the landfill both as a single point source and as a multiple point source. The comparison of the results shows that, to a first approximation, it is sufficient to consider the landfill of Palermo as a single point source. Starting from the monthly percentage composition of the biogas estimated for the study period, the rate of biogas produced by the dump was evaluated. The total biogas produced by the landfill, obtained as the sum of the emitted component and the recovered one, ranged from 7,519.97 to 10,153.7 m³/h. For the study period, the average monthly estimates of biogas emissions into the atmosphere amount to about 60% of the total biogas produced by the landfill, a little higher than the estimate made by the company responsible for the biogas recovery plant at the landfill.
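A single-point-source Gaussian plume estimate of the kind described can be sketched as follows; the dispersion parameterization, the receptor geometry, and the measured concentration are all hypothetical, and the emission rate is recovered by exploiting the plume formula's linearity in Q:

```python
import math

def plume_conc(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Gaussian plume concentration (g/m^3) at receptor (x, y, z) for a point
    source of strength Q (g/s) at stack height H, wind speed u (m/s).
    sigma_y = a*x, sigma_z = b*x is a crude stability parameterization (assumed)."""
    sy, sz = a * x, b * x
    return (Q / (2 * math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * (math.exp(-(z - H)**2 / (2 * sz**2))
               + math.exp(-(z + H)**2 / (2 * sz**2))))  # ground reflection term

# Inverse problem: given a measured concentration downwind, estimate Q.
u, x, y, z, H = 2.0, 1500.0, 50.0, 2.0, 10.0  # illustrative geometry (m, m/s)
measured = 1.2e-4                              # hypothetical CH4 excess, g/m^3
Q_est = measured / plume_conc(1.0, u, x, y, z, H)   # C is linear in Q
print(round(Q_est, 2))
```

Treating the landfill as a multiple point source would simply sum several such terms, which is why the single-source approximation is the natural first check.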

  17. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishing of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least-squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is demonstrated on time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) one ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) one ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  18. High Emergence of ESBL-Producing E. coli Cystitis: Time to Get Smarter in Cyprus.

    Science.gov (United States)

    Cantas, Leon; Suer, Kaya; Guler, Emrah; Imir, Turgut

    2015-01-01

    Widespread prevalence of extended-spectrum β-lactamase producing Escherichia coli (ESBL-producing E. coli) limits the therapeutic options for infection and is a growing global health problem. In this study our aim was to investigate the antimicrobial resistance profile of E. coli in hospitalized patients and out-patients in Cyprus. During the period 2010-2014, 389 strains of E. coli were isolated from urine samples of hospitalized patients and out-patients in Cyprus. ESBL-producing E. coli was observed in 53% of hospitalized patients and 44% of out-patients, the latest isolates dating from 2014. All ESBL-producing E. coli remained susceptible to amikacin and to the carbapenems except ertapenem (in-patients = 6%, out-patients = 11%). The high emergence of ESBL-producing E. coli in urine samples from hospitalized patients and out-patients is an extremely worrisome sign of the development of untreatable infections in the near future on the island. We therefore emphasize the immediate need to establish optimal therapy guidelines based on country-specific surveillance programs. The need for new treatment strategies, urgent changes in prescription habits, and a ban on over-the-counter sale of antimicrobials at each segment of healthcare services is also discussed in this research.

  19. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  20. Discrete time modelization of human pilot behavior

    Science.gov (United States)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a discrete-time process, he can perform only one task at a time, and his operating mode depends on the flight subphase considered. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process, and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.

  1. Linear Parametric Model Checking of Timed Automata

    DEFF Research Database (Denmark)

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata) for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. We also present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  2. A seasonal model of contracts between a monopsonistic processor and smallholder pepper producers in Costa Rica

    NARCIS (Netherlands)

    Sáenz Segura, F.; Haese, D' M.F.C.; Schipper, R.A.

    2010-01-01

    We model the contractual arrangements between smallholder pepper (Piper nigrum L.) producers and a single processor in Costa Rica. Producers in the El Roble settlement sell their pepper to only one processing firm, which exerts its monopsonistic bargaining power by setting the purchase price of

  3. Modelling of Patterns in Space and Time

    CERN Document Server

    Murray, James

    1984-01-01

    This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...

  4. FOOTPRINT: A Screening Model for Estimating the Area of a Plume Produced From Gasoline Containing Ethanol

    Science.gov (United States)

    FOOTPRINT is a screening model used to estimate the length and surface area of benzene, toluene, ethylbenzene, and xylene (BTEX) plumes in groundwater, produced from a gasoline spill that contains ethanol.

  5. Axial model in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Barcelos-Neto, J.; Farina, C.; Vaidya, A.N.

    1986-12-11

    We study the axial model in a background gravitational field. Using the zeta-function regularization, we obtain explicitly the anomalous divergence of the axial-vector current and the exact generating functional of the theory. We show that, as a consequence of a space-time-dependent metric, all differential equations involved in the theory generalize to their covariantized forms. We also comment on the finite-mass renormalization exhibited by the pseudoscalar field and the form of the fermion propagator.

  6. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Background: Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods: Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results: Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions: Time series methods applied to historical ED utilization data are an important tool
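The trimmed-mean seasonal baseline plus a residual-based alarm threshold can be sketched on synthetic data; the weekly seasonal pattern, the 10% trim, and the 3-SD rule are illustrative assumptions, not the paper's fitted models:

```python
import statistics, random

random.seed(1)
# Synthetic three-year ED history: weekly seasonality (busier weekends) + noise.
history = [60 + (15 if d % 7 in (5, 6) else 0) + random.gauss(0, 5)
           for d in range(365 * 3)]

def trimmed_mean_baseline(history, dow, trim=0.1):
    """Expected visits for a day-of-week: trimmed mean of same-weekday history."""
    vals = sorted(v for d, v in enumerate(history) if d % 7 == dow)
    k = int(len(vals) * trim)
    return statistics.mean(vals[k:len(vals) - k])

def alarm(observed, history, dow, n_sd=3.0):
    """Flag a day whose count exceeds its seasonal baseline by n_sd residual SDs."""
    base = trimmed_mean_baseline(history, dow)
    resid_sd = statistics.stdev(v - trimmed_mean_baseline(history, d % 7)
                                for d, v in enumerate(history))
    return observed > base + n_sd * resid_sd

# A 30-visit excess on a weekday should trip the detector; a normal day should not.
print(alarm(60 + 30, history, dow=2), alarm(60, history, dow=2))
```

The paper layers ARIMA-modeled residuals on top of such a seasonal baseline to absorb recent trends; this sketch shows only the baseline-plus-threshold core.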

  7. Effects of Storage Time on the Stability of Tomato Puree Produced ...

    African Journals Online (AJOL)

    Tomato puree was produced through the hot-break method from the Derica variety of tomato, packaged in high-density polyethylene and stored at ambient temperature (32 ± 2 °C). The stored puree was analysed weekly for physicochemical composition, sensory attributes and microbial load until it became unwholesome.

  8. Modeling utilization distributions in space and time

    Science.gov (United States)

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
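The wrapped Cauchy kernel for a circular temporal covariate can be sketched as follows; the concentration parameter ρ and the example dates are assumptions, and the full method is a product kernel over x, y, and this circular t:

```python
import math

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle; rho in (0, 1) sets concentration."""
    return (1 - rho**2) / (2 * math.pi * (1 + rho**2 - 2 * rho * math.cos(theta - mu)))

def circular_kde(theta, samples, rho=0.9):
    """Kernel density estimate for a circular covariate (e.g. day of year as angle)."""
    return sum(wrapped_cauchy(theta, s, rho) for s in samples) / len(samples)

# Days of year mapped to angles; a valid circular density integrates to ~1.
days = [30, 35, 40, 200, 210]
angles = [2 * math.pi * d / 365 for d in days]
grid = [2 * math.pi * i / 1000 for i in range(1000)]
total = sum(circular_kde(t, angles) for t in grid) * (2 * math.pi / 1000)
print(round(total, 3))  # ≈ 1.0
```

A linear (e.g. Gaussian) kernel would leak density across the year boundary, which is exactly what the wrapped kernel avoids.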

  9. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  10. Multi-model cross-pollination in time

    Science.gov (United States)

    Du, Hailiang; Smith, Leonard A.

    2017-09-01

    The predictive skill of complex models is rarely uniform in model-state space; in weather forecasting models, for example, the skill of the model can be greater in the regions of most interest to a particular operational agency than it is in "remote" regions of the globe. Given a collection of models, a multi-model forecast system using the cross-pollination in time approach can be generalized to take advantage of instances where some models systematically produce forecasts with more information regarding specific components of the model-state than other models. This generalization is stated and then successfully demonstrated in a moderate (~40)-dimensional nonlinear dynamical system, suggested by Lorenz, using four imperfect models with similar global forecast skill. Applications to weather forecasting and economic forecasting are discussed. Given that the relative importance of different phenomena in shaping the weather changes with latitude, changes in attitude among forecast centers, in terms of the resources assigned to each phenomenon, are to be expected. The demonstration establishes that cross-pollinating elements of forecast trajectories enriches the collection of simulations upon which the forecast is built and, given the same collection of models, can yield a new forecast system with significantly more skill than the original forecast system.

  11. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists in Bali. The number of Korean tourists visiting Bali over the period January 2010 to December 2015 was used to model Korean tourist arrivals (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, with its parameters approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
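A minimal sketch of a TVP regression with a Kalman-filtered coefficient follows; it uses a single synthetic predictor and assumed noise variances (the paper's model has three predictors and is fit to real arrival data):

```python
import random

random.seed(7)
# Scalar TVP regression y_t = beta_t * x_t + noise, with beta_t a random walk.
q, r = 1e-4, 0.25            # state and observation noise variances (assumed)
beta_true, betas, xs, ys = 1.0, [], [], []
for t in range(500):
    beta_true += random.gauss(0, q ** 0.5)   # coefficient drifts over time
    x = random.gauss(1, 0.5)
    xs.append(x)
    ys.append(beta_true * x + random.gauss(0, r ** 0.5))
    betas.append(beta_true)

# Kalman filter for the time-varying coefficient.
b_est, p = 0.0, 1.0          # initial state mean and variance
estimates = []
for x, y in zip(xs, ys):
    p += q                                   # predict step (random-walk state)
    k = p * x / (x * x * p + r)              # Kalman gain
    b_est += k * (y - b_est * x)             # measurement update
    p *= (1 - k * x)
    estimates.append(b_est)

err = sum(abs(e - b) for e, b in zip(estimates[100:], betas[100:])) / 400
print(err < 0.2)  # after burn-in, the filter tracks the drifting coefficient
```

An ordinary least-squares fit would return one fixed coefficient; the filter's point is that the coefficient itself is allowed to evolve, which is what "time varying parameter" means here.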

  12. The manifold model for space-time

    International Nuclear Information System (INIS)

    Heller, M.

    1981-01-01

    Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C-infinity manifold and of certain structures, which arise in a natural way from the manifold concept, are given. The role of the enveloping Euclidean space (i.e. of the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct step towards the quantization of space-time. (author)

  13. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results....... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained...... during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration...

  14. Time Modeling: Salvatore Sciarrino, Windows and Beclouding

    Directory of Open Access Journals (Sweden)

    Acácio Tadeu de Camargo Piedade

    2017-08-01

    Full Text Available In this article I intend to discuss one of the figures created by the Italian composer Salvatore Sciarrino: the windowed form. After the composer's explanation of this figure, I argue that windows in composition can open inwards and outwards from the musical discourse. On one side, they point to the composition's inner ambiences and constitute an internal remission. On the other, they instigate the audience to comprehend the external reference, thereby constructing intertextuality. After the outward window form, I will consider some techniques of distortion, particularly one that I call beclouding. To conclude, I will comment on the questions of memory and of composition as time modeling.

  15. A generic model for keeping quality of vegetable produce during storage and distribution

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Polderdijk, J.J.

    1996-01-01

    A generic model on the keeping quality of perishable produce was formulated, based on the kinetics of the decrease of individual quality attributes. The model includes the effects of temperature, chilling injury and different levels of initial quality and of quality acceptance limits. Keeping

  16. A novel methodology to model the cooling processes of packed horticultural produce using 3D shape models

    Science.gov (United States)

    Gruyters, Willem; Verboven, Pieter; Rogge, Seppe; Vanmaercke, Simon; Ramon, Herman; Nicolai, Bart

    2017-10-01

    Freshly harvested horticultural produce require a proper temperature management to maintain their high economic value. Towards this end, low temperature storage is of crucial importance to maintain a high product quality. Optimizing both the package design of packed produce and the different steps in the postharvest cold chain can be achieved by numerical modelling of the relevant transport phenomena. This work presents a novel methodology to accurately model both the random filling of produce in a package and the subsequent cooling process. First, a cultivar-specific database of more than 100 realistic CAD models of apple and pear fruit is built with a validated geometrical 3D shape model generator. To have an accurate representation of a realistic picking season, the model generator also takes into account the biological variability of the produce shape. Next, a discrete element model (DEM) randomly chooses surface meshed bodies from the database to simulate the gravitational filling process of produce in a box or bin, using actual mechanical properties of the fruit. A computational fluid dynamics (CFD) model is then developed with the final stacking arrangement of the produce to study the cooling efficiency of packages under several conditions and configurations. Here, a typical precooling operation is simulated to demonstrate the large differences between using actual 3D shapes of the fruit and an equivalent spheres approach that simplifies the problem drastically. From this study, it is concluded that using a simplified representation of the actual fruit shape may lead to a severe overestimation of the cooling behaviour.
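
As a rough contrast to the full DEM/CFD treatment described above, the cooling of a single item of produce is often summarized by a lumped-capacitance (Newtonian) curve and its seven-eighths cooling time. This is a deliberate simplification of the paper's 3D-shape approach, and the time constant `tau` is an assumed input, not a value from the study:

```python
import math

def lumped_cooling_temp(t, T0, Ta, tau):
    """Newtonian (lumped-capacitance) cooling of a single fruit:
    T(t) = Ta + (T0 - Ta) * exp(-t / tau), with air temperature Ta,
    initial product temperature T0 and time constant tau (seconds)."""
    return Ta + (T0 - Ta) * math.exp(-t / tau)

def seven_eighths_cooling_time(tau):
    """Time needed to remove 7/8 of the initial temperature difference,
    a standard precooling performance metric: tau * ln(8)."""
    return tau * math.log(8.0)
```

The CFD study's point is precisely that a single `tau` (or an equivalent-sphere geometry) hides the spatial variation that actual fruit shapes and stacking produce.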

  17. A COMPARISON OF THE TENSILE STRENGTH OF PLASTIC PARTS PRODUCED BY A FUSED DEPOSITION MODELING DEVICE

    Directory of Open Access Journals (Sweden)

    Juraj Beniak

    2015-12-01

    Full Text Available Rapid Prototyping systems are nowadays increasingly used in many areas of industry, not only for producing design models but also for producing parts for final use. We need to know the properties of these parts. When we talk about the Fused Deposition Modeling (FDM technique and FDM devices, there are many possible settings for devices and models which could influence the properties of a final part. In addition, devices based on the same principle may use different operational software for calculating the tool path, and this may have a major impact. The aim of this paper is to show the tensile strength value for parts produced from different materials on the Fused Deposition Modeling device when the horizontal orientation of the specimens is changed.

  18. Initial activation state, stimulation intensity and timing of stimulation interact in producing behavioral effects of TMS

    OpenAIRE

    Silvanto, Juha; Bona, Silvia; Cattaneo, Zaira

    2017-01-01

    Behavioral effects of transcranial magnetic stimulation (TMS) have been shown to depend on various factors, such as neural activation state, stimulation intensity, and timing of stimulation. Here we examined whether these factors interact, by applying TMS at either sub- or suprathreshold intensity (relative to phosphene threshold, PT) and at different time points during a state-dependent TMS paradigm. The state manipulation involved a behavioral task in which a visual prime (color grating) wa...

  19. A Parametric Sizing Model for Molten Regolith Electrolysis Reactors to Produce Oxygen from Lunar Regolith

    Science.gov (United States)

    Schreiner, Samuel S.; Dominguez, Jesus A.; Sibille, Laurent; Hoffman, Jeffrey A.

    2015-01-01

    We present a parametric sizing model for a Molten Regolith Electrolysis (MRE) reactor that produces oxygen and molten metals from lunar regolith. The model has a foundation of regolith material properties validated using data from Apollo samples and simulants. A multiphysics simulation of an MRE reactor is developed and leveraged to generate a vast database of reactor performance and design trends. A novel design methodology is created which utilizes this database to parametrically design an MRE reactor that 1) can sustain the required mass of molten regolith, current, and operating temperature to meet the desired oxygen production level, 2) can operate for long durations via joule-heated, cold-wall operation in which molten regolith does not touch the reactor side walls, and 3) can support a range of electrode separations to enable operational flexibility. Mass, power, and performance estimates for an MRE reactor are presented for a range of oxygen production levels. The effects of several design variables are explored, including operating temperature, regolith type/composition, batch time, and the degree of operational flexibility.

  20. A triangular model of dimensionless runoff producing rainfall hyetographs in Texas

    Science.gov (United States)

    Asquith, W.H.; Bumgarner, J.R.; Fahlquist, L.S.

    2003-01-01

    A synthetic triangular hyetograph was needed for a large database of Texas rainfall and runoff. A hyetograph represents the temporal distribution of rainfall intensity at a point or over a watershed during a storm. Synthetic hyetographs are estimates of the expected time distribution of a design storm and are used principally in the design of hydraulic structures for small watersheds. A database of more than 1,600 observed cumulative hyetographs that produced runoff from 91 small watersheds (generally less than about 50 km2) was used to provide statistical parameters for a simple triangular-shaped hyetograph model. The model provides an estimate of the average hyetograph in dimensionless form for storm durations of 0 to 24 hours and 24 to 72 hours. As a result of this study, the authors concluded that the expected dimensionless cumulative hyetographs for the 0 to 12 hour and 12 to 24 hour durations were sufficiently similar to be combined with minimal information loss. The analysis also suggests that dimensionless cumulative hyetographs are independent of the frequency level or return period of total storm depth and thus are readily used for many design applications. The two triangular hyetographs presented are intended to enhance small-watershed design practice in applicable parts of Texas.
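
A triangular intensity distribution implies a piecewise-quadratic dimensionless cumulative hyetograph. A minimal sketch follows, where the peak position `tp` is an assumed parameter rather than one of the paper's fitted values:

```python
def triangular_cumulative_hyetograph(t, tp):
    """Dimensionless cumulative rainfall fraction P(t) for a triangular
    hyetograph whose intensity peaks at dimensionless time tp (0 < tp < 1).
    Both t and the returned fraction lie on [0, 1]."""
    if not 0.0 < tp < 1.0:
        raise ValueError("peak time tp must lie strictly between 0 and 1")
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    if t <= tp:
        return t * t / tp                       # rising limb: quadratic growth
    return 1.0 - (1.0 - t) ** 2 / (1.0 - tp)    # falling limb

def scale_hyetograph(depth_mm, duration_h, tp, n=25):
    """Convert the dimensionless curve to cumulative depth (mm) vs time (h)."""
    times = [duration_h * i / (n - 1) for i in range(n)]
    depths = [depth_mm * triangular_cumulative_hyetograph(i / (n - 1), tp)
              for i in range(n)]
    return times, depths
```

Scaling by total storm depth and duration, as in the second function, is how such dimensionless curves are applied to a specific design storm.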

  1. Effects of build parameters on linear wear loss in plastic part produced by fused deposition modeling

    Science.gov (United States)

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2017-07-01

    Fused Deposition Modeling (FDM) is one of the prominent additive manufacturing technologies for producing polymer products. FDM is a complex additive manufacturing process that can be influenced by many process conditions, and the industrial demands placed on it are increasing, requiring higher levels of product functionality and properties. The functionality and performance of FDM-manufactured parts are greatly influenced by the combination of many FDM process parameters. Designers and researchers have paid close attention to the effects of FDM process parameters on product properties such as mechanical strength, surface quality, dimensional accuracy, build time and material consumption. However, very limited studies have been carried out to investigate and optimize the effect of FDM build parameters on wear performance. This study focuses on the effect of different build parameters on the micro-structural and wear performance of FDM specimens using a definitive screening design based quadratic model. This would reduce the cost and effort required for an additive manufacturing engineer to make systematic decisions among the manufacturing parameters to achieve the desired product quality.

  2. Development of a multiplex real-time PCR to quantify aflatoxin, ochratoxin A and patulin producing molds in foods.

    Science.gov (United States)

    Rodríguez, Alicia; Rodríguez, Mar; Andrade, María J; Córdoba, Juan J

    2012-04-02

    A multiplex real-time PCR (qPCR) method to quantify aflatoxin, ochratoxin A (OTA) and patulin producing molds in foods was developed. For this, the primer pairs F/R-omt, F/R-npstr and F/R-idhtrb and the TaqMan probes OMTprobe, NPSprobe and IDHprobe, targeting the omt-1, otanpsPN and idh genes involved in aflatoxin, OTA and patulin biosynthesis, respectively, were used. The functionality of the developed qPCR method was demonstrated by the highly linear relationship of the standard curves constructed from the omt-1, otanpsPN and idh gene copies and the threshold cycle (Ct) values for the respective producing molds. The ability of the optimized qPCR protocol to quantify producing molds was evaluated in different artificially inoculated foods (fruits, nuts, cereals and dry-ripened meat and cheese products). Efficiency values ranged from 81 to 110% in all inoculated foods. The detection limit was between 3 and 1 log cfu/g for aflatoxin, OTA and patulin producing molds. The developed multiplex qPCR was shown to be an appropriate tool for sensitive quantification of the growth of toxigenic fungi in foods throughout the incubation time. Thus, the multiplex qPCR is a useful, rapid and efficient method to quantify simultaneously aflatoxin, OTA and patulin producing molds in food products. Copyright © 2012 Elsevier B.V. All rights reserved.
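
The standard-curve quantification underlying the qPCR method — Ct regressed on log10 gene copies, with amplification efficiency derived from the slope — can be sketched as follows; the numbers used in the test are illustrative, not the paper's data:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Ordinary least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def amplification_efficiency(slope):
    """Percent PCR efficiency from the standard-curve slope; a slope of
    -3.3219 (perfect doubling each cycle) corresponds to 100%."""
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

def quantify(ct, slope, intercept):
    """Invert the curve: estimated log10 target gene copies for an observed Ct."""
    return (ct - intercept) / slope
```

The 81-110% efficiency range reported in the abstract corresponds to standard-curve slopes between roughly -3.9 and -3.1.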

  3. Striatal lesions produce distinctive impairments in reaction time performance in two different operant chambers.

    Science.gov (United States)

    Brasted, P J; Döbrössy, M D; Robbins, T W; Dunnett, S B

    1998-08-01

    The dorsal striatum plays a crucial role in mediating voluntary movement. Excitotoxic striatal lesions in rats have previously been shown to impair the initiation but not the execution of movement in a choice reaction time task in an automated lateralised nose-poke apparatus (the "nine-hole box"). Conversely, when a conceptually similar reaction time task has been applied in a conventional operant chamber (or "Skinner box"), striatal lesions have been seen to impair the execution rather than the initiation of the lateralised movement. The present study was undertaken to compare directly these two results by training the same group of rats to perform a choice reaction time task in the two chambers and then comparing the effects of a unilateral excitotoxic striatal lesion in both chambers in parallel. Particular attention was paid to adopting similar parameters and contingencies in the control of the task in the two test chambers. After striatal lesions, the rats showed predominantly contralateral impairments in both tasks. However, they showed a deficit in reaction time in the nine-hole box but an apparent deficit in response execution in the Skinner box. This finding confirms the previous studies and indicates that differences in outcome are not simply attributable to procedural differences in the lesions, training conditions or tasks parameters. Rather, the pattern of reaction time deficit after striatal lesions depends critically on the apparatus used and the precise response requirements for each task.

  4. A MODEL FOR PRODUCING STABLE, BROADBAND TERAHERTZ COHERENT SYNCHROTRON RADIATION IN STORAGE RINGS

    International Nuclear Information System (INIS)

    Sannibale, Fernando; Byrd, John M.; Loftsdottir, Agusta; Martin, MichaelC.; Venturini, Marco

    2003-01-01

    We present a model for producing stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), enhancing higher frequency coherent emission and limits to stable emission due to a microbunching instability excited by the SR. We use this model to optimize the performance of a source for CSR emission

  5. Time-resolved x-ray line diagnostics of laser-produced plasmas

    International Nuclear Information System (INIS)

    Kauffman, R.L.; Matthews, D.L.; Kilkenny, J.D.; Lee, R.W.

    1982-01-01

    We have examined the underdense plasma conditions of laser-irradiated disks using K x-rays from highly ionized ions. A 900 ps laser pulse of 0.532 μm light is used to irradiate disks of various Z which have been doped with low concentrations of tracer materials. The tracers, whose Z's range from 13 to 22, are chosen so that their K x-ray spectrum is sensitive to typical underdense plasma temperatures and densities. Spectra are measured using a time-resolved crystal spectrograph recording the time history of the x-ray spectrum. A spatially resolved, time-integrated crystal spectrograph also monitors the x-ray lines. Large differences in Al spectra are observed when the host plasma is changed from SiO2 to PbO or In. Spectra will be presented along with a preliminary analysis of the data.

  6. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation is basically an optimization of the S.T.I. of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (the computer code FRANTIC III). A new approximation for computing system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.), and the extra computing cost is negligible. A.C.I. was added to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The system studied is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es
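
A first-order sketch of the surveillance-test-interval trade-off that such a methodology optimizes is shown below. This is the standard textbook model for a periodically tested standby component, not the FRANTIC III implementation itself, and the `lam` and `tau` values in the usage test are illustrative:

```python
import math

def mean_unavailability(lam, T):
    """Exact interval-averaged unavailability of a standby component with
    constant failure rate lam (1/h), tested and restored every T hours:
    q(t) = 1 - exp(-lam * t), averaged over one test interval."""
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

def total_unavailability(lam, T, tau):
    """First-order total: failure contribution ~ lam*T/2 plus the fraction
    of time tau/T that the component is down for the test itself."""
    return lam * T / 2.0 + tau / T

def optimal_test_interval(lam, tau):
    """Interval minimizing the first-order total: T* = sqrt(2 * tau / lam).
    Testing more often than T* costs more downtime than it saves."""
    return math.sqrt(2.0 * tau / lam)
```

This captures the basic tension behind S.T.I. optimization: longer intervals let undetected failures accumulate, shorter ones spend more time in test-induced outage.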

  7. Producing near-real-time intelligence: predicting the world of tomorrow

    NARCIS (Netherlands)

    Barros, A.I.; Broek, A.C. van den; Dalen, J.A. van; Vecht, B. van der; Wevers, J.

    2014-01-01

    The complexity and dynamics of current military operations demand reliable and up-to-date intelligence and in particular near-real-time threat assessment. This paper explores the potential of operational analysis techniques in supporting military personnel in processing information from different

  8. Time-domain Simulations of the Acoustic Streaming Produced by a Propagating Wave Radiated by a Circular Piston

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco

    2013-01-01

    Results of numerical simulations of the sound field produced by a circular piston in a rigid baffle are presented. The aim was to calculate the acoustic streaming and the flow of mass generated by the sound field. For this purpose, the classical finite-difference time-domain method was implemented...
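
A minimal 1D illustration of the finite-difference time-domain approach mentioned above, for the linear acoustic equations on a staggered grid; the grid sizes and Gaussian pulse are illustrative, and the paper's axisymmetric piston model and streaming calculation are not reproduced here:

```python
import math

def fdtd_1d_acoustic(nx=400, steps=100, c=343.0, rho=1.2, dx=0.01):
    """Leapfrog FDTD for the 1D linear acoustic equations
        dp/dt = -rho * c^2 * dv/dx,   dv/dt = -(1/rho) * dp/dx,
    starting from a Gaussian pressure pulse at rest; the initial pulse
    splits into two counter-propagating half-amplitude waves."""
    dt = 0.5 * dx / c                     # CFL number 0.5 (stable)
    p = [math.exp(-(((i - nx // 4) * dx) / 0.05) ** 2) for i in range(nx)]
    v = [0.0] * (nx + 1)                  # velocity on a staggered grid
    for _ in range(steps):
        for i in range(1, nx):            # v[0] = v[nx] = 0: rigid ends
            v[i] -= dt / (rho * dx) * (p[i] - p[i - 1])
        for i in range(nx):
            p[i] -= dt * rho * c * c / dx * (v[i + 1] - v[i])
    return p
```

After 100 steps at CFL 0.5 the two half-amplitude pulses have each travelled 50 cells from the initial centre at cell 100.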

  9. The HTA core model: a novel method for producing and reporting health technology assessments

    DEFF Research Database (Denmark)

    Lampe, Kristian; Mäkelä, Marjukka; Garrido, Marcial Velasco

    2009-01-01

    OBJECTIVES: The aim of this study was to develop and test a generic framework to enable international collaboration for producing and sharing results of health technology assessments (HTAs). METHODS: Ten international teams constructed the HTA Core Model, dividing information contained in a comprehensive HTA into standardized pieces, the assessment elements. Each element contains a generic issue that is translated into practical research questions while performing an assessment. Elements were described in detail in element cards. Two pilot assessments, designated as Core HTAs, were also produced. The Model and Core HTAs were both validated. Guidance on the use of the HTA Core Model was compiled into a Handbook. RESULTS: The HTA Core Model considers health technologies through nine domains. Two applications of the Model were developed, one for medical and surgical interventions and another...

  10. Linear time domain model of the acoustic potential field.

    Science.gov (United States)

    Lesniewski, Peter J

    2002-08-01

    A new time domain formulation of the acoustic wave is developed to avoid the approximating assumptions of the linearized scalar wave equation that limit its validity to low-Mach particle velocity modeling or to a smooth potential field in a stationary medium. The proposed model offers the precision of the moving frame while retaining the form of the widely used linearized scalar wave equation, although with respect to modified coordinates. It is applicable to field calculations involving transient waves with unlimited particle velocity, propagating in inhomogeneous fluids or in those with time-varying density. The model is based on the exact flux continuity equation and the equation of motion, both expressed in the moving reference frame. The resulting closed-form free-space scalar wave equation employing total derivatives is converted back to the partial differential form by using modified independent variables. The modified variables are related to the common coordinates of space and time through integral expressions involving the transient particle velocity, representing the wave radiated by each point of a stationary source. Consequently, the transient field produced by complex surface velocity sources can be calculated using the existing surface integrals of radiation theory, although with modified coordinates. The use of the proposed model is presented in a numerical simulation of a transient velocity source vibrating at selected magnitudes, leading to the determination of the propagating pressure and velocity wave at any point.

  11. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    OpenAIRE

    Rao, Yuhan; Zhu, Xiaolin; Chen, Jin; Wang, Jianmin

    2015-01-01

    Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions for current NDVI datasets. Therefore, several methods are developed to produce high resolution (spatial and temporal) NDVI time-series datasets, which face some limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently bl...

  12. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    Directory of Open Access Journals (Sweden)

    M. I. Gutierrez

    2016-01-01

    Full Text Available Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the resulting thermal pattern, to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom. The insertion place of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, producing different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to acoustic field distributions.

  13. Use of Real Time Satellite Infrared and Ocean Color to Produce Ocean Products

    Science.gov (United States)

    Roffer, M. A.; Muller-Karger, F. E.; Westhaver, D.; Gawlikowski, G.; Upton, M.; Hall, C.

    2014-12-01

    Real-time data products derived from infrared and ocean color satellites are useful for several types of users around the world. Highly relevant applications include recreational and commercial fisheries, commercial towing vessel and other maritime and navigation operations, and other scientific and applied marine research. Uses of the data include developing sampling strategies for research programs, tracking of water masses and ocean fronts, optimizing ship routes, evaluating water quality conditions (coastal, estuarine, oceanic), and developing fisheries and essential fish habitat indices. Important considerations for users are data access and delivery mechanisms, and data formats. At this time, the data are being generated in formats increasingly available on mobile computing platforms, and are delivered through popular interfaces including social media (Facebook, Linkedin, Twitter and others), Google Earth and other online Geographical Information Systems, or are simply distributed via subscription by email. We review 30 years of applications and describe how we develop customized products and delivery mechanisms working directly with users. We review benefits and issues of access to government databases (NOAA, NASA, ESA), standard data products, and the conversion to tailored products for our users. We discuss advantages of different product formats and of the platforms used to display and to manipulate the data.

  14. Identification of neutral biochemical network models from time series data

    Directory of Open Access Journals (Sweden)

    Maia Marco

    2009-05-01

    Full Text Available Abstract Background The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. Results In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. Conclusion The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.

  15. Identification of neutral biochemical network models from time series data.

    Science.gov (United States)

    Vilela, Marco; Vinga, Susana; Maia, Marco A Grivet Mattoso; Voit, Eberhard O; Almeida, Jonas S

    2009-05-05

    The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.
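
The canonical S-system form used in both records, dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij, can be integrated with a simple fixed-step scheme. This sketch is generic and does not use the paper's L. lactis parameters:

```python
def s_system_rhs(x, alpha, g, beta, h):
    """Right-hand side of an S-system:
    dX_i/dt = alpha_i * prod_j X_j^g[i][j] - beta_i * prod_j X_j^h[i][j]."""
    n = len(x)
    dx = []
    for i in range(n):
        prod_in = alpha[i]
        prod_out = beta[i]
        for j in range(n):
            prod_in *= x[j] ** g[i][j]
            prod_out *= x[j] ** h[i][j]
        dx.append(prod_in - prod_out)
    return dx

def simulate(x0, alpha, g, beta, h, dt=0.01, steps=5000):
    """Fixed-step 4th-order Runge-Kutta integration of the S-system."""
    x = list(x0)
    for _ in range(steps):
        k1 = s_system_rhs(x, alpha, g, beta, h)
        k2 = s_system_rhs([xi + dt / 2 * ki for xi, ki in zip(x, k1)], alpha, g, beta, h)
        k3 = s_system_rhs([xi + dt / 2 * ki for xi, ki in zip(x, k2)], alpha, g, beta, h)
        k4 = s_system_rhs([xi + dt * ki for xi, ki in zip(x, k3)], alpha, g, beta, h)
        x = [xi + dt / 6 * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x
```

A Monte Carlo search such as the one the paper describes would repeatedly draw candidate (alpha, g, beta, h) sets, run `simulate`, and keep those whose trajectories match the observed time series; the kinetic orders g and h then define the inferred network topology.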

  16. Space-time resolved measurements of spontaneous magnetic fields in laser-produced plasma

    Czech Academy of Sciences Publication Activity Database

    Pisarczyk, T.; Gus’kov, S.Yu.; Dudžák, Roman; Chodukowski, T.; Dostál, Jan; Demchenko, N. N.; Korneev, Ph.; Kalinowska, Z.; Kalal, M.; Renner, Oldřich; Šmíd, Michal; Borodziuk, S.; Krouský, Eduard; Ullschmied, Jiří; Hřebíček, Jan; Medřík, Tomáš; Golasowski, Jiří; Pfeifer, Miroslav; Skála, Jiří; Pisarczyk, P.

    2015-01-01

    Vol. 22, No. 10 (2015), Article No. 102706. ISSN 1070-664X R&D Projects: GA MŠk LM2010014; GA MŠk(CZ) LD14089; GA ČR GPP205/11/P712 Grant - others:FP7(XE) 284464 Program:FP7 Institutional support: RVO:61389021 ; RVO:68378271 Keywords: space-time resolved spontaneous magnetic field (SMF) * Laser System Subject RIV: BL - Plasma and Gas Discharge Physics; BL - Plasma and Gas Discharge Physics (FZU-D) OECD category: Fluids and plasma physics (including surface physics); Fluids and plasma physics (including surface physics) (FZU-D) Impact factor: 2.207, year: 2015 http://scitation.aip.org/content/aip/journal/pop/22/10/10.1063/1.4933364

  17. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  18. Comparison of dimensional accuracy of digital dental models produced from scanned impressions and scanned stone casts

    Science.gov (United States)

    Subeihi, Haitham

    Introduction: Digital models of dental arches play an increasingly important role in dentistry. A digital dental model can be generated by directly scanning intraoral structures, by scanning a conventional impression of oral structures, or by scanning a stone cast poured from the conventional impression. An accurate digital scan model is fundamental to the fabrication of dental restorations. Aims: 1. To compare the dimensional accuracy of digital dental models produced by scanning of impressions versus scanning of stone casts. 2. To compare the dimensional accuracy of digital dental models produced by scanning of impressions made of three different materials (polyvinyl siloxane, polyether or vinyl polyether silicone). Methods and Materials: This laboratory study included taking addition silicone, polyether and vinyl polyether silicone impressions from an epoxy reference model that was created from an original typodont. Teeth numbers 28 and 30 on the typodont, with tooth number 29 missing, were prepared for a three-unit metal-ceramic fixed dental prosthesis with tooth #29 as a pontic. After tooth preparation, an epoxy resin reference model was fabricated by duplicating the typodont quadrant that included the tooth preparations. From this reference model, 12 polyvinyl siloxane impressions, 12 polyether impressions and 12 vinyl polyether silicone impressions were made. All 36 impressions were scanned before being poured with dental stone. The 36 dental stone casts were, in turn, scanned to produce digital models. A reference digital model was made by scanning the reference model. Six groups of digital models were produced: three groups were made by scanning the impressions obtained with the three different materials, and the other three by scanning the dental casts that resulted from pouring the impressions made with the three different materials. Groups of digital models were compared using Root Mean Squares (RMS) in terms of their

  19. MONITORING AND MODELLING OF AIR POLLUTION PRODUCED BY AIRCRAFT ENGINE EMISSION INSIDE THE ATHENS INTERNATIONAL AIRPORT

    Directory of Open Access Journals (Sweden)

    Oleksander I. Zaporozhets

    2009-04-01

Full Text Available Experimental measurements of air pollution produced by aircraft engine emissions during acceleration and take-off on the runway were carried out inside the airport. The measurement data were used for verification of modelling results from the complex model «PolEmiCa», which consists of the following basic components: engine emission inventory calculation; transport of the contaminants by engine jets; and dispersion of the contaminants in the atmosphere due to wind and atmospheric turbulence.

  20. The Influence of Variation in Time and HCl Concentration to the Glucose Produced from Kepok Banana

    Science.gov (United States)

    Widodo M, Rohman; Noviyanto, Denny; RM, Faisal

    2016-01-01

Kepok banana (Musa paradisiaca) is a plant that has many advantages from its fruit, stems, leaves, flowers and cob. However, we tend to take benefit only from the fruit. We grow and harvest the fruit without taking advantage of the other parts, so they become waste, or harbour pests, if not used. The idea of taking benefit from the banana crop residues, especially the cob, is rarely explored. This study is an introduction to the use of the banana cob, especially the glucose it contains. The study uses hydrolysis with HCl as a catalyst, with concentration variations of 0.4 N, 0.6 N and 0.8 N and hydrolysis time variations of 20 minutes, 25 minutes and 30 minutes. The stages of the hydrolysis include preparation of materials, the hydrolysis process itself, and analysis of the results using Fehling's reagent and titration against a standard glucose solution. HCl is used as a catalyst because it is cheaper than an enzyme with the same function. NaOH 60% is used to neutralize the pH of the hydrolysis filtrate. From the results of the analysis, the biggest yield of glucose is at a concentration of 0.8 N and 30 minutes of reaction, which gives 6.25 gram glucose per 20 gram dry sample, a conversion of 27.22%.

  1. Footprint (A Screening Model for Estimating the Area of a Plume Produced from Gasoline Containing Ethanol

    Science.gov (United States)

    FOOTPRINT is a simple and user-friendly screening model to estimate the length and surface area of BTEX plumes in ground water produced from a spill of gasoline that contains ethanol. Ethanol has a potential negative impact on the natural biodegradation of BTEX compounds in groun...

  2. Modeling Coastal Vulnerability through Space and Time.

    Directory of Open Access Journals (Sweden)

    Thomas Hopper

Full Text Available Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across

  3. Modeling Coastal Vulnerability through Space and Time.

    Science.gov (United States)

    Hopper, Thomas; Meixler, Marcia S

    2016-01-01

    Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across multiple time

  4. Stochastic modelling of Listeria monocytogenes single cell growth in cottage cheese with mesophilic lactic acid bacteria from aroma producing cultures

    DEFF Research Database (Denmark)

    Østergaard, Nina Bjerre; Christiansen, Lasse Engbo; Dalgaard, Paw

    2015-01-01

A stochastic model was developed for simultaneous growth of low numbers of Listeria monocytogenes and populations of lactic acid bacteria from the aroma producing cultures applied in cottage cheese. During more than two years, different batches of cottage cheese with aroma culture were analysed... Growth of L. monocytogenes single cells was simulated using lag time distributions corresponding to three different stress levels [Østergaard, N.B., Eklöw, A. and Dalgaard, P. 2014. Modelling the effect of lactic acid bacteria from starter- and aroma culture on growth of Listeria monocytogenes in cottage cheese. International Journal of Food Microbiology. 188, 15-25].

  5. Real time model for public transportation management

    Directory of Open Access Journals (Sweden)

    Ireneusz Celiński

    2014-03-01

Full Text Available Background: The article outlines managing a public transportation fleet in the dynamic aspect. There are currently many technical possibilities of identifying demand in the transportation network. It is also possible to indicate a legitimate basis for estimating and steering demand. The article describes a general public transportation fleet management concept based on balancing demand and supply. Material and methods: The presented method utilizes a matrix description of demand for transportation based on telemetric and telecommunication data. Emphasis was placed mainly on the general concept and not the manner in which data was collected by other researchers. Results: The above model gave results in the form of a system for managing a fleet in real time. The objective of the system is also to optimally utilize the means of transportation at the disposal of service providers. Conclusions: The presented concept enables a new perspective on managing public transportation fleets. In case of implementation, the project would facilitate, among other things, designing dynamic timetables, updated based on observed demand, and even designing dynamic points of access to public transportation lines. Further research should encompass so-called rerouting based on dynamic measurements of the characteristics of the transportation system.

  6. Stochastic modelling of Listeria monocytogenes single cell growth in cottage cheese with mesophilic lactic acid bacteria from aroma producing cultures.

    Science.gov (United States)

    Østergaard, Nina Bjerre; Christiansen, Lasse Engbo; Dalgaard, Paw

    2015-07-02

    A stochastic model was developed for simultaneous growth of low numbers of Listeria monocytogenes and populations of lactic acid bacteria from the aroma producing cultures applied in cottage cheese. During more than two years, different batches of cottage cheese with aroma culture were analysed for pH, lactic acid concentration and initial concentration of lactic acid bacteria. These data and bootstrap sampling were used to represent product variability in the stochastic model. Lag time data were estimated from observed growth data (lactic acid bacteria) and from literature on L. monocytogenes single cells. These lag time data were expressed as relative lag times and included in growth models. A stochastic model was developed from an existing deterministic growth model including the effect of five environmental factors and inter-bacterial interaction [Østergaard, N.B, Eklöw, A and Dalgaard, P. 2014. Modelling the effect of lactic acid bacteria from starter- and aroma culture on growth of Listeria monocytogenes in cottage cheese. International Journal of Food Microbiology. 188, 15-25]. Growth of L. monocytogenes single cells, using lag time distributions corresponding to three different stress levels, was simulated. The simulated growth was subsequently compared to growth of low concentrations (0.4-1.0 CFU/g) of L. monocytogenes in cottage cheese, exposed to similar stresses, and in general a good agreement was observed. In addition, growth simulations were performed using population relative lag time distributions for L. monocytogenes as reported in literature. Comparably good predictions were obtained as for the simulations performed using lag time data for individual cells of L. monocytogenes. Therefore, when lag time data for individual cells are not available, it was suggested that relative lag time distributions for L. monocytogenes can be used as a qualified default assumption when simulating growth of low concentrations of L. monocytogenes. Copyright
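The single-cell simulation described above (individual lags expressed as relative lag times, i.e. multiples of the generation time) can be sketched in a few lines; the growth rate and the lag-time distribution used here are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

random.seed(42)

MU_MAX = 0.02                    # assumed max specific growth rate (1/min)
GEN_TIME = math.log(2) / MU_MAX  # generation time in minutes (about 35 min)

def simulate_population(n_cells, t_minutes, rlt_mean=3.0, rlt_sd=1.0):
    """Total population from n_cells single cells, each starting exponential
    growth only after an individual lag = RLT * generation time, where the
    relative lag time (RLT) is drawn per cell from an assumed distribution."""
    total = 0.0
    for _ in range(n_cells):
        rlt = max(0.0, random.gauss(rlt_mean, rlt_sd))  # illustrative choice
        lag = rlt * GEN_TIME
        grow_time = max(0.0, t_minutes - lag)
        total += math.exp(MU_MAX * grow_time)
    return total

print(simulate_population(10, 0))         # exactly 10.0: no growth at t = 0
print(simulate_population(10, 600) > 10)  # True: population has grown by 10 h
```

Scaling lag to the generation time is what makes relative lag time distributions reusable as a default when condition-specific single-cell lag data are unavailable, as the abstract suggests.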

  7. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical
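The calories-to-population step described above can be illustrated with back-of-envelope arithmetic; the function shape and every number below are invented placeholders, not parameters of the actual model:

```python
# Sketch of the caloric-constraint step: potential population supported by
# simulated crop calories, capped by labor/land limits. All values invented.
def potential_population(cropland_km2, yield_kcal_per_km2_yr,
                         labor_limit_persons, kcal_per_person_yr=9.13e5):
    # 9.13e5 kcal/yr is roughly 2500 kcal/day * 365 days per person.
    food_limited = cropland_km2 * yield_kcal_per_km2_yr / kcal_per_person_yr
    # Labor requirements and land limitations collapsed into one cap here.
    return min(food_limited, labor_limit_persons)

print(int(potential_population(100, 2.0e9, 500_000)))  # food-limited case
print(int(potential_population(100, 2.0e9, 1_000)))    # labor-limited case
```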

  8. Time series modelling of overflow structures

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.

    1997-01-01

    The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities...... to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published...

  9. Magnetic-time model for seed germination | Mahajan | African ...

    African Journals Online (AJOL)

    On the basis of this, a new germination model called magnetic time model is developed which was incorporated in hydrothermal model and hence nominated as hydrothermal magnetic time model which is proposed to incorporate the effect of magnetic field of different intensities on plants. Magnetic time constant ΘB is ...

  10. Continuous time structural equation modeling with R package ctsem

    NARCIS (Netherlands)

    Driver, C.C.; Oud, J.H.L.; Völkle, M.C.

    2017-01-01

    We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An

  11. Time series sightability modeling of animal populations.

    Directory of Open Access Journals (Sweden)

    Althea A ArchMiller

Full Text Available Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
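The modified Horvitz-Thompson adjustment mentioned above can be sketched in a few lines; the logistic coefficients, the covariate, and the group data below are hypothetical, not from the moose surveys:

```python
import math

# Hypothetical logistic sightability model: detection probability as a
# function of a visual-obstruction covariate (coefficients are invented).
def detection_prob(voc, b0=2.0, b1=-3.5):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * voc)))

# Detected groups in a detection-only survey: (group size, covariate value).
detected_groups = [(2, 0.1), (1, 0.4), (3, 0.2), (1, 0.7)]

# Modified Horvitz-Thompson estimate: inflate each detected group by the
# inverse of its estimated detection probability to account for missed groups.
N_hat = sum(size / detection_prob(voc) for size, voc in detected_groups)
print(round(N_hat, 1))  # about 10.3, versus 7 animals actually counted
```

The inflation by 1/p is exactly where visibility bias is corrected: groups that were hard to see stand in for similar groups that were missed entirely.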

  12. Modeling of phosphorus fluxes produced by wild fires at watershed scales.

    Science.gov (United States)

    Matyjasik, M.; Hernandez, M.; Shaw, N.; Baker, M.; Fowles, M. T.; Cisney, T. A.; Jex, A. P.; Moisen, G.

    2017-12-01

River runoff is one of the controlling processes in the terrestrial phosphorus cycle. Phosphorus is often a limiting factor in fresh water. One of the factors that has not been studied and modeled in detail is the phosphorus flux produced by forest wild fires. Phosphate released by weathering is quickly absorbed in soils. Forest wild fires expose barren soils to intensive erosion, thus releasing relatively large fluxes of phosphorus. Measurements from three control burn sites were used to correlate erosion with phosphorus fluxes. These results were used to model phosphorus fluxes from burned watersheds during a five-year period after fires occurred. Erosion in our model is simulated using a combination of two models: the WEPP (USDA Water Erosion Prediction Project) and the GeoWEPP (GIS-based Water Erosion Prediction Project). Erosion produced from forest disturbances is predicted for any watershed using hydrologic, soil, and meteorological data unique to the individual watersheds or individual slopes. The erosion results are modified for different textural soil classes and slope angles to model fluxes of phosphorus. The results of these models are calibrated using measured concentrations of phosphorus for three watersheds located in the Interior Western United States. The results will help the United States Forest Service manage phosphorus fluxes in national forests.

  13. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  14. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  15. Long-time asymptotics for polymerization models

    OpenAIRE

    Calvo, Juan; Doumic, Marie; Perthame, Benoît

    2017-01-01

This study is devoted to the long-term behavior of nucleation, growth and fragmentation equations, modeling the spontaneous formation and kinetics of large polymers in a spatially homogeneous and closed environment. Such models are, for instance, commonly used in the biophysical community in order to model in vitro experiments of fibrillation. We investigate the interplay between four processes: nucleation, polymerization, depolymerization and fragmentation. We first revisit the well-known L...

  16. Time series sightability modeling of animal populations

    Science.gov (United States)

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.

  17. MATHEMATICAL MODEL DESIGNATED FOR THE ASSESSMENT OF THE INTEGRATED ENVIRONMENTAL LOAD PRODUCED BY A BUILDING PROJECT

    OpenAIRE

    Lapidus Azariy Abramovich; Berezhnyy Aleksandr Yurevich

    2012-01-01

In the paper, the authors propose a mathematical model designated for the assessment of the ecological impact produced on the environment within the territory of the construction site. An integrated index EI (Environmental Index) is introduced as a vehicle for the evaluation of the ecological load. EI represents the intensity of the ecological load: a generalized and optimized parameter reflecting the intensity of the anthropogenic impact of the construction site onto the natural e...

  18. Forecasting the Reference Evapotranspiration Using Time Series Model

    Directory of Open Access Journals (Sweden)

    H. Zare Abyaneh

    2016-10-01

evapotranspiration were obtained. The mean values of evapotranspiration in the study period were 4.42, 3.93, 5.05, 5.49, and 5.60 mm day−1 in Esfahan, Semnan, Shiraz, Kerman, and Yazd, respectively. The Augmented Dickey-Fuller (ADF) test was performed on the time series. The results showed that in all stations except Shiraz, the time series had a unit root and were non-stationary. The non-stationary time series became stationary at the 1st difference. Using the EViews 7 software, seasonal ARIMA models were applied to the evapotranspiration time series, and the R2 coefficient of determination, Durbin-Watson statistic (DW), Hannan-Quinn (HQ), Schwarz (SC) and Akaike information criteria (AIC) were used to select the best models for the stations. The selected models are listed in Table 2. Moreover, information criteria (AIC, SC, and HQ) were used to assess model parsimony. The independence assumption of the model residuals was confirmed by a sensitive diagnostic check. Furthermore, the homoscedasticity and normality assumptions were tested using other diagnostic tests.

Table 2 - The selected time series models for the stations

  Station  | Seasonal ARIMA model    |   SC   |   HQ   |  AIC   |   R2   |   DW
  Esfahan  | ARIMA(1,1,1)×(1,0,1)12  | 1.2571 | 1.2840 | 1.2396 | 0.8800 | 1.9987
  Semnan   | ARIMA(5,1,2)×(1,0,1)12  | 1.5665 | 1.5122 | 1.4770 | 0.8543 | 1.9911
  Shiraz   | ARIMA(2,0,3)×(1,0,1)12  | 1.3312 | 1.2881 | 1.2601 | 0.9665 | 1.9873
  Kerman   | ARIMA(5,1,1)×(1,0,1)12  | 1.8097 | 1.7608 | 1.8097 | 0.8557 | 2.0042
  Yazd     | ARIMA(2,1,3)×(1,1,1)12  | 1.7472 | 1.7032 | 1.6746 | 0.5264 | 1.9943

The seasonal ARIMA models presented in Table 2 were used at the 12-month (2004-2005) forecasting horizon. The results showed that the models produce good out-of-sample forecasts: over all the stations the lowest correlation coefficient and the highest root mean square error were 0.988 and 0.515 mm day−1, respectively.
Conclusion: In the presented paper, reference evapotranspiration in the five synoptic
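The out-of-sample skill measures reported above (correlation coefficient and root mean square error of the 12-month forecasts) can be computed mechanically; the observed and forecast series below are invented for illustration, not the study's data:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical 12-month out-of-sample check (mm/day values invented).
observed = [4.1, 4.8, 5.9, 6.7, 7.4, 7.9, 7.6, 6.9, 5.8, 4.9, 4.2, 3.9]
forecast = [4.0, 4.9, 5.7, 6.9, 7.2, 8.1, 7.5, 6.8, 6.0, 4.7, 4.3, 3.8]
print(round(rmse(observed, forecast), 3), round(pearson_r(observed, forecast), 3))
```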

  19. Hierarchical Bayes Models for Response Time Data

    Science.gov (United States)

    Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha

    2010-01-01

    Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…

  20. Attributing impacts to emissions traced to major fossil energy and cement producers over specific historical time periods

    Science.gov (United States)

    Ekwurzel, B.; Frumhoff, P. C.; Allen, M. R.; Boneham, J.; Heede, R.; Dalton, M. W.; Licker, R.

    2017-12-01

    Given the progress in climate change attribution research over the last decade, attribution studies can inform policymakers guided by the UNFCCC principle of "common but differentiated responsibilities." Historically this has primarily focused on nations, yet requests for information on the relative role of the fossil energy sector are growing. We present an approach that relies on annual CH4 and CO2 emissions from production through to the sale of products from the largest industrial fossil fuel and cement production company records from the mid-nineteenth century to present (Heede 2014). Analysis of the global trends with all the natural and human drivers compared with a scenario without the emissions traced to major carbon producers over full historical versus select periods of recent history can be policy relevant. This approach can be applied with simple climate models and earth system models depending on the type of climate impacts being investigated. For example, results from a simple climate model, using best estimate parameters and emissions traced to 90 largest carbon producers, illustrate the relative difference in global mean surface temperature increase over 1880-2010 after removing these emissions from 1980-2010 (29-35%) compared with removing these emissions over 1880-2010 (42-50%). The changing relative contributions from the largest climate drivers can be important to help assess the changing risks for stakeholders adapting to and reducing exposure and vulnerability to regional climate change impacts.
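A heavily simplified version of the counterfactual calculation (remove traced emissions, compare the warming) can be sketched using TCRE-style proportionality between warming and cumulative CO2. This is not the study's simple climate model, which also handles CH4 and timing effects, and every number below is an illustrative assumption:

```python
# TCRE-style sketch: global warming taken as roughly proportional to
# cumulative CO2 emissions, so removing traced emissions removes a
# proportional share of warming. All numbers are illustrative assumptions.
tcre = 0.00045              # K per GtCO2 (assumed transient response)
total_cumulative = 2400.0   # GtCO2, all anthropogenic emissions (illustrative)
traced_cumulative = 1100.0  # GtCO2 traced to major producers (illustrative)

dT_all = tcre * total_cumulative
dT_counterfactual = tcre * (total_cumulative - traced_cumulative)
share = 1 - dT_counterfactual / dT_all  # fraction of warming attributed

print(round(dT_all, 2), round(100 * share, 1))
```

Under this linearity the attributed share equals the emissions share, which is why the study's result depends so strongly on which historical period the traced emissions are removed over.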

  1. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

Full Text Available We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals, therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
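The key point, that a continuous time parameterization yields correct discrete dynamics for any measurement interval, can be illustrated with the exact discretization of a univariate Ornstein-Uhlenbeck process (a one-dimensional analogue of the models ctsem fits; parameter values are illustrative):

```python
import math
import random

random.seed(0)

# Exact discretization of a univariate continuous time process
# dx = A*x dt + noise (Ornstein-Uhlenbeck): the same two continuous time
# parameters generate correct discrete dynamics for ANY interval length,
# which is what frees the model from equal measurement intervals.
A = -0.4  # drift / auto-effect (negative: mean-reverting); illustrative
Q = 0.1   # diffusion variance per unit time; illustrative

def step(x, dt):
    phi = math.exp(A * dt)                          # discrete autoregression
    q = Q * (math.exp(2 * A * dt) - 1.0) / (2 * A)  # exact noise variance
    return phi * x + random.gauss(0.0, math.sqrt(q))

# Irregular measurement occasions, as allowed by continuous time models:
x, t = 1.0, 0.0
for dt in [0.5, 2.0, 0.25, 3.0]:
    x = step(x, dt)
    t += dt
print(round(t, 2))  # 5.75
```

A discrete time cross-lagged model would instead estimate a single autoregression coefficient tied to one fixed interval, which is exactly the bias the abstract describes.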

  2. Adaptive Modeling and Real-Time Simulation

    Science.gov (United States)

    1984-01-01

Artificial Intelligence, Vol. 13, pp. 27-39 (1980). Describes circumscription, which is just the assumption that everything that is known to have a particular... Keywords: Artificial Intelligence; Truth Maintenance; Planning; Resolution; Modeling; World Models. ...represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in

  3. A congested and dwell time dependent transit corridor assignment model

    OpenAIRE

    Alonso Oreña, Borja; Muñoz, Juan Carlos; Ibeas Portilla, Ángel; Moura Berodia, José Luis

    2016-01-01

This research proposes an equilibrium assignment model for congested public transport corridors in urban areas. In this model, journey times incorporate the effect of bus queuing on travel times and the effect of boarding and alighting passengers on dwell times at stops. The model also considers limited bus capacity, leading to longer waiting times and more uncomfortable journeys. The proposed model is applied to an example network, and the results are compared with those obtained in a recent study. This is...

  4. Three dimensional modeling on airflow, heat and mass transfer in partially impermeable enclosure containing agricultural produce during natural convective cooling

    International Nuclear Information System (INIS)

    Chourasia, M.K.; Goswami, T.K.

    2007-01-01

A three dimensional model was developed to simulate the transport phenomena in a heat and mass generating porous medium cooled under a natural convective environment. Unlike previous works on this aspect, the present model was aimed at bulk stored agricultural produce contained in a permeable package placed on a hard surface. This situation made the bottom of the package impermeable to fluid flow as well as moisture transfer, and adiabatic to heat transfer. The velocity vectors, isotherms and contours of the rate of moisture loss were presented during transient cooling as well as at steady state using a commercially available computational fluid dynamics (CFD) code based on the finite volume technique. The CFD model was validated using experimental data on the time-temperature history as well as weight loss obtained from a bag of potatoes kept in a cold store. The simulated and experimental values of temperature and moisture loss of the product were found to be in good agreement

  5. Real-Time Vocal Tract Modelling

    Directory of Open Access Journals (Sweden)

    K. Benkrid

    2008-03-01

    To date, most speech synthesis techniques have relied upon the representation of the vocal tract by some form of filter, a typical example being linear predictive coding (LPC). This paper describes the development of a physiologically realistic model of the vocal tract using the well-established technique of transmission line modelling (TLM). This technique is based on the principle of wave scattering at transmission line segment boundaries and may be used in one, two, or three dimensions. This work uses this technique to model the vocal tract using a one-dimensional transmission line. A six-port scattering node is applied in the region separating the pharyngeal, oral, and the nasal parts of the vocal tract.
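
The wave-scattering principle named above can be illustrated with a minimal one-dimensional sketch. This is not the paper's six-port TLM node; it is a simplified two-port junction chain in the Kelly-Lochbaum style, with the tube areas and the junction sign convention chosen as illustrative assumptions.

```python
# Minimal 1-D transmission-line scattering sketch (illustrative only, not
# the paper's six-port TLM model). Areas and sign convention are assumptions.

def reflection_coeffs(areas):
    """Reflection coefficient at each junction between adjacent tube segments."""
    return [(a1 - a2) / (a1 + a2) for a1, a2 in zip(areas, areas[1:])]

def scatter_step(fwd, bwd, ks):
    """One scattering pass: fwd/bwd hold forward/backward waves per segment."""
    new_f, new_b = fwd[:], bwd[:]
    for j, k in enumerate(ks):               # junction between segments j and j+1
        f, b = fwd[j], bwd[j + 1]            # waves incident on the junction
        new_f[j + 1] = (1 + k) * f - k * b   # transmitted plus reflected part
        new_b[j] = k * f + (1 - k) * b
    return new_f, new_b
```

With uniform areas all reflection coefficients vanish and the waves simply propagate from segment to segment, which is a quick sanity check on the junction update.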

  6. Role of thyrotropin-releasing hormone in prolactin-producing cell models.

    Science.gov (United States)

    Kanasaki, Haruhiko; Oride, Aki; Mijiddorj, Tselmeg; Kyo, Satoru

    2015-12-01

    Thyrotropin-releasing hormone (TRH) is a hypothalamic hypophysiotropic neuropeptide that was named for its ability to stimulate the release of thyroid-stimulating hormone in mammals. It later became apparent that it exerts a number of species-dependent hypophysiotropic activities that regulate other pituitary hormones. TRH also regulates the synthesis and release of prolactin, although whether it is a physiological regulator of prolactin remains unclear. Occupation of the Gq protein-coupled TRH receptor in the prolactin-producing lactotroph increases the turnover of inositol, which in turn activates the protein kinase C pathway and the release of Ca(2+) from storage sites. TRH-induced signaling events also include the activation of extracellular signal-regulated kinase (ERK) and induction of MAP kinase phosphatase, an inactivator of activated ERK. TRH stimulates prolactin synthesis through the activation of ERK, whereas prolactin release occurs via elevation of intracellular Ca(2+). We have been investigating the role of TRH in a pituitary prolactin-producing cell model. Rat pituitary somatolactotroph GH3 cells, which produce and release both prolactin and growth hormone (GH), are widely used as a model for the study of prolactin- and GH-secreting cells. In this review, we describe the general action of TRH as a hypophysiotropic factor in vertebrates and focus on the role of TRH in prolactin synthesis using GH3 cells. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Time-dependent H-like and He-like Al lines produced by ultra-short pulse laser

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Takako; Kato, Masatoshi [National Inst. for Fusion Science, Nagoya (Japan); Shepherd, R.; Young, B.; More, R.; Osterheld, Al

    1998-03-01

    We have performed numerical modeling of time-resolved x-ray spectra from thin foil targets heated by the LLNL Ultra-Short Pulse (USP) laser. The targets were aluminum foils of thickness ranging from 250 Å to 1250 Å, heated with 120 fs pulses of 400 nm light from the USP laser. The laser energy was approximately 0.2 J, focused to a 3 micron spot size for a peak intensity near 2 × 10^19 W/cm^2. Lyα and Heα lines were recorded using a 900 fs x-ray streak camera. We calculate the effective ionization, recombination and emission rate coefficients, including density effects, for H-like and He-like aluminum ions using a collisional radiative model. We calculate time-dependent ion abundances using these effective ionization and recombination rate coefficients. The time-dependent electron temperature and density used in the calculation are based on an analytical model for the hydrodynamic expansion of the target foils. During the laser pulse the target is ionized. After the laser heating stops, the plasma begins to recombine. Using the calculated time-dependent ion abundances and the effective emission rate coefficients, we calculate the time-dependent Lyα and Heα lines. The calculations reproduce the main qualitative features of the experimental spectra. (author)
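
The time-dependent ion abundances described above obey rate equations of the form dn_i/dt = (ionization in) − (recombination out). A minimal two-state sketch with invented rate constants and simple forward-Euler stepping (the paper's collisional-radiative model with density-dependent coefficients is far richer):

```python
def evolve_abundances(n_lo, n_hi, S, R, dt, steps):
    """Two-state ionization balance: dn_lo/dt = -S*n_lo + R*n_hi.

    S: effective ionization rate, R: effective recombination rate
    (illustrative constants, not the paper's coefficients).
    """
    for _ in range(steps):
        dn = (-S * n_lo + R * n_hi) * dt
        n_lo += dn
        n_hi -= dn
    return n_lo, n_hi
```

Total abundance is conserved by construction, and the populations relax toward the equilibrium fractions R/(S+R) and S/(S+R), mirroring the recombination phase after the heating pulse ends.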

  8. Modeling Venus-Like Worlds Through Time

    OpenAIRE

    Way, M. J.; Del Genio, Anthony; Amundsen, David S.

    2018-01-01

    We explore the atmospheric and surface history of a hypothetical paleo-Venus climate using a 3-D General Circulation Model. We constrain our model with the in-situ and remote sensing Venus data available today. Given that Venus and Earth are believed to be similar geochemically some aspects of Earth's history are also utilized. We demonstrate that it is possible for ancient Venus and Venus-like exoplanetary worlds to exist within the liquid water habitable zone with insolations up to nearly 2...

  9. Axiomatics of uniform space-time models

    International Nuclear Information System (INIS)

    Levichev, A.V.

    1983-01-01

    The mathematical statement of the space-time axiomatics of the special theory of relativity is given; it postulates that the space-time M is a connected, simply connected, Hausdorff, locally compact four-dimensional topological group with a given order. The following theorem is proved: if the invariant order in the four-dimensional group M is given by a semigroup P whose contingent cone K contains interior points, then M is commutative. An analogous theorem holds for groups of dimension two and three.

  10. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model

  11. A continuous-time control model on production planning network ...

    African Journals Online (AJOL)

    A continuous-time control model on production planning network. DEA Omorogbe, MIU Okunsebor. Abstract. In this paper, we give a slightly detailed review of Graves and Hollywood model on constant inventory tactical planning model for a job shop. The limitations of this model are pointed out and a continuous time ...

  12. Modelling biological pathway dynamics with Timed Automata

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Urquidi Camacho, R.A.; Wanders, B.; van der Vet, P.E.; Karperien, Hermanus Bernardus Johannes; Langerak, Romanus; van de Pol, Jan Cornelis; Post, Janine Nicole

    2012-01-01

    When analysing complex interaction networks occurring in biological cells, a biologist needs computational support in order to understand the effects of signalling molecules (e.g. growth factors, drugs). ANIMO (Analysis of Networks with Interactive MOdelling) is a tool that allows the user to create

  13. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    The effect of receiver coil alignment errors δ on the response of electromagnetic measurements in a layered earth model is studied. The statistics of generalized least square inverse was employed to analyzed the errors on three different geophysical applications. The following results were obtained: (i) The FEM ellipiticity is ...

  14. Numerical time integration for air pollution models

    NARCIS (Netherlands)

    J.G. Verwer (Jan); W. Hundsdorfer (Willem); J.G. Blom (Joke)

    1998-01-01

    Due to the large number of chemical species and the three space dimensions, off-the-shelf stiff ODE integrators are not feasible for the numerical time integration of stiff systems of advection-diffusion-reaction equations [ ∂c/∂t + ∇·(u c) = ∇·(
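
Stiffness is the key difficulty named above: explicit time integrators need prohibitively small steps for the fast reaction terms. A tiny sketch contrasting explicit and implicit Euler on the standard scalar test problem y' = -λy (a textbook illustration, not the authors' scheme):

```python
def explicit_euler(lam, y0, dt, steps):
    """Explicit Euler on y' = -lam*y; unstable when dt*lam > 2."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def implicit_euler(lam, y0, dt, steps):
    """Implicit Euler: solve y_new = y + dt*(-lam*y_new), i.e.
    y_new = y / (1 + dt*lam); unconditionally stable for lam > 0."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)
    return y
```

With λ = 1000 and dt = 0.01 the explicit iterate grows without bound while the implicit one decays, which is why dedicated stiff techniques are needed for the reaction part of such systems.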

  15. Modeling Time Series Data for Supervised Learning

    Science.gov (United States)

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…

  16. Modeling of wear behavior of Al/B4C composites produced by powder metallurgy

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Ismail; Bektas, Asli [Gazi Univ., Ankara (Turkey). Dept. of Industrial Design Engineering; Guel, Ferhat; Cinci, Hanifi [Gazi Univ., Ankara (Turkey). Dept. of Materials and Metallurgy Engineering

    2017-06-01

    The wear characteristics of composites with an Al matrix reinforced with 5, 10, 15 and 20% B4C particles, produced by the powder metallurgy method, were studied. For this purpose, a mixture of Al and B4C powders was pressed under 650 MPa pressure and then sintered at 635 °C. The analysis of hardness, density and microstructure was performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads through 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feed-forward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.
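
The ANN described above maps three process inputs to one wear output. A minimal feed-forward sketch with one tanh hidden layer, trained here by plain stochastic gradient descent (the paper used the Levenberg-Marquardt algorithm; the toy data, layer size and learning rate below are illustrative assumptions):

```python
import math
import random

def train_mlp(xs, ys, hidden=4, epochs=3000, lr=0.05, seed=0):
    """Tiny 1-hidden-layer regression MLP trained by SGD (a stand-in for
    the Levenberg-Marquardt training used in the paper)."""
    rng = random.Random(seed)
    n_in = len(xs[0])
    w1 = [[rng.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(w1, b1)]
        return h, sum(w * hj for w, hj in zip(w2, h)) + b2

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                      # gradient of squared error
            for j in range(hidden):
                gh = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * gh * x[i]
                b1[j] -= lr * gh
            b2 -= lr * err
    return lambda x: forward(x)[1]
```

On a toy additive target the network fits closely, which is the same regression-style use (process parameters in, wear value out) that the abstract describes.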

  17. Modeling of wear behavior of Al/B4C composites produced by powder metallurgy

    International Nuclear Information System (INIS)

    Sahin, Ismail; Bektas, Asli; Guel, Ferhat; Cinci, Hanifi

    2017-01-01

    The wear characteristics of composites with an Al matrix reinforced with 5, 10, 15 and 20% B4C particles, produced by the powder metallurgy method, were studied. For this purpose, a mixture of Al and B4C powders was pressed under 650 MPa pressure and then sintered at 635 °C. The analysis of hardness, density and microstructure was performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads through 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feed-forward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.

  18. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.

  19. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
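
In a time-dependent network each edge carries a travel time that depends on the departure time; under the usual FIFO assumption a Dijkstra-style label-setting search still yields earliest arrivals. A minimal sketch (the toy timetable below is an assumption, not taken from these papers):

```python
import heapq

def earliest_arrival(graph, source, target, t0):
    """Earliest-arrival query on a time-dependent network.

    graph[u] = list of (v, travel_fn), where travel_fn(t) is the time to
    traverse edge u->v when departing u at time t (FIFO assumed).
    """
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue                      # stale queue entry
        for v, travel in graph.get(u, []):
            arr = t + travel(t)
            if arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return float("inf")
```

The travel-time function naturally encodes waiting for the next scheduled departure, which is exactly what makes this model convenient for train timetables compared with materialising a full space-time graph.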

  20. Rapid detection of aflatoxin producing fungi in food by real-time quantitative loop-mediated isothermal amplification.

    Science.gov (United States)

    Luo, Jie; Vogel, Rudi F; Niessen, Ludwig

    2014-12-01

    Aflatoxins represent a serious risk for human and animal health. They are mainly produced by Aspergillus flavus and Aspergillus parasiticus but also by Aspergillus nomius. Three species specific turbidimeter based real-time LAMP (loop-mediated isothermal amplification) assays were developed to quantify the three species individually in conidial solutions and to define contamination levels in samples of shelled Brazil nuts, maize, and peanuts. Standard curves relating spore numbers to time to threshold (Tt) values were set up for each of the species. Assays had detection limits of 10, 100 and 100 conidia per reaction of A. flavus, A. parasiticus, and A. nomius, respectively. Analysis of contaminated sample materials revealed that the A. nomius specific real-time LAMP assay detected a minimum of 10 conidia/g in Brazil nuts while assays specific for A. flavus and A. parasiticus had detection limits of 10^2 conidia/g and 10^5 conidia/g, respectively, in peanut samples as well as 10^4 conidia/g and 10^4 conidia/g, respectively, in samples of maize. The real-time LAMP assays developed here appear to be promising tools for the prediction of potential aflatoxigenic risk at an early stage and in all critical control points of the food and feed production chain. Copyright © 2014 Elsevier Ltd. All rights reserved.
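
Standard curves of the kind described relate log10(conidia) linearly to time-to-threshold (Tt), and unknown samples are quantified by inverting the fitted line. A sketch with synthetic calibration values (the slope and intercept below are invented, not the paper's):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def estimate_log_count(tt, a, b):
    """Invert Tt = a*log10(N) + b to recover log10(spore count)."""
    return (tt - b) / a
```

Higher spore loads amplify sooner, so the slope is negative; a measured Tt then maps straight back to an estimated contamination level.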

  1. Semi-empirical model for the generation of dose distributions produced by a scanning electron beam

    International Nuclear Information System (INIS)

    Nath, R.; Gignac, C.E.; Agostinelli, A.G.; Rothberg, S.; Schulz, R.J.

    1980-01-01

    There are linear accelerators (Sagittaire and Saturne accelerators produced by Compagnie Generale de Radiologie (CGR/MeV) Corporation) which produce broad, flat electron fields by magnetically scanning the relatively narrow electron beam as it emerges from the accelerator vacuum system. A semi-empirical model, which mimics the scanning action of this type of accelerator, was developed for the generation of dose distributions in homogeneous media. The model employs the dose distributions of the scanning electron beams. These were measured with photographic film in a polystyrene phantom by turning off the magnetic scanning system. The mean deviation calculated from measured dose distributions is about 0.2%; a few points have deviations as large as 2 to 4% inside of the 50% isodose curve, but less than 8% outside of the 50% isodose curve. The model has been used to generate the electron beam library required by a modified version of a commercially-available computerized treatment-planning system. (The RAD-8 treatment planning system was purchased from the Digital Equipment Corporation. It is currently available from Electronic Music Industries

  2. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    Science.gov (United States)

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.05) and manufacturing method (P<.05). However, the gaps obtained with all four manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.

  3. Can single classifiers be as useful as model ensembles to produce benthic seabed substratum maps?

    Science.gov (United States)

    Turner, Joseph A.; Babcock, Russell C.; Hovey, Renae; Kendrick, Gary A.

    2018-05-01

    Numerous machine-learning classifiers are available for benthic habitat map production, which can lead to different results. This study highlights the performance of the Random Forest (RF) classifier, which was significantly better than Classification Trees (CT), Naïve Bayes (NB), and a multi-model ensemble in terms of overall accuracy, Balanced Error Rate (BER), Kappa, and area under the curve (AUC) values. RF accuracy was often higher than 90% for each substratum class, even at the most detailed level of the substratum classification and AUC values also indicated excellent performance (0.8-1). Total agreement between classifiers was high at the broadest level of classification (75-80%) when differentiating between hard and soft substratum. However, this sharply declined as the number of substratum categories increased (19-45%) including a mix of rock, gravel, pebbles, and sand. The model ensemble, produced from the results of all three classifiers by majority voting, did not show any increase in predictive performance when compared to the single RF classifier. This study shows how a single classifier may be sufficient to produce benthic seabed maps and model ensembles of multiple classifiers.
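
The multi-model ensemble above combined the outputs of the three classifiers by majority voting. A minimal per-sample majority-vote sketch (the tie-breaking rule, favouring the earliest-listed classifier, is an assumption; conventions differ):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one label list per classifier, aligned by sample.
    Ties go to the earliest-listed classifier (assumed rule)."""
    voted = []
    for labels in zip(*predictions):
        counts = Counter(labels)
        top = max(counts.values())
        voted.append(next(l for l in labels if counts[l] == top))
    return voted
```

As the study found, such a vote can only help where the member classifiers disagree usefully; when one member (here RF) dominates, the ensemble simply tracks it.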

  4. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    Science.gov (United States)

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models.
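
The fit quality reported above is measured in RMSD. A sketch of plain coordinate RMSD between two equally long chains (LatFit additionally optimises the lattice placement and offers a distance-RMSD variant; both steps are omitted here):

```python
import math

def coordinate_rmsd(chain_a, chain_b):
    """Root-mean-square deviation between matched 3-D coordinates
    (no superposition/optimisation step, unlike LatFit's fitting)."""
    assert len(chain_a) == len(chain_b) and chain_a
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(chain_a, chain_b))
    return math.sqrt(total / len(chain_a))
```

A value around 1.5 Å, as reported for the FCC lattice, means each modelled atom sits on average about one and a half ångströms from its position in the original structure.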

  5. Co-producing simulation models to inform resource management: a case study from southwest South Dakota

    Science.gov (United States)

    Miller, Brian W.; Symstad, Amy J.; Frid, Leonardo; Fisichelli, Nicholas A.; Schuurman, Gregor W.

    2017-01-01

    Simulation models can represent complexities of the real world and serve as virtual laboratories for asking “what if…?” questions about how systems might respond to different scenarios. However, simulation models have limited relevance to real-world applications when designed without input from people who could use the simulated scenarios to inform their decisions. Here, we report on a state-and-transition simulation model of vegetation dynamics that was coupled to a scenario planning process and co-produced by researchers, resource managers, local subject-matter experts, and climate change adaptation specialists to explore potential effects of climate scenarios and management alternatives on key resources in southwest South Dakota. Input from management partners and local experts was critical for representing key vegetation types, bison and cattle grazing, exotic plants, fire, and the effects of climate change and management on rangeland productivity and composition given the paucity of published data on many of these topics. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between grazer density and vegetation composition, as well as between the short- and long-term costs of invasive species management. It also pointed to impactful uncertainties related to the effects of fire and grazing on vegetation. More broadly, a scenario-based approach to model co-production bracketed the uncertainty associated with climate change and ensured that the most important (and impactful) uncertainties related to resource management were addressed. This cooperative study demonstrates six opportunities for scientists to engage users throughout the modeling process to improve model utility and relevance: (1) identifying focal dynamics and variables, (2) developing conceptual model(s), (3) parameterizing the simulation, (4) identifying relevant climate scenarios and management

  6. MATHEMATICAL MODEL DESIGNATED FOR THE ASSESSMENT OF THE INTEGRATED ENVIRONMENTAL LOAD PRODUCED BY A BUILDING PROJECT

    Directory of Open Access Journals (Sweden)

    Lapidus Azariy Abramovich

    2012-10-01

    The theoretical background of the proposed approach consists in an integrated methodology implemented in the system engineering of construction projects. A building system may be represented as the aggregate of all stages of construction works and participants involved in them. The building system is object-oriented, and it is implemented under the impact of pre-determined environmental factors. The core constituent of the building system represents a Production Technology Module (PTM), or summarized groups of processes. The model formula designated for the assessment of the intensity of the ecological load produced by the construction project on the environment may be represented as follows:

  7. Time Series Modeling for Structural Response Prediction

    Science.gov (United States)

    1988-11-14

    [Front-matter extraction residue; abstract not recovered. Recoverable abbreviations: 2DOF, two-degree-of-freedom; 2LS, two-stage least squares method; 3DOF, three-degree-of-freedom.]

  8. With string model to time series forecasting

    Science.gov (United States)

    Pinčák, Richard; Bartoš, Erik

    2015-10-01

    The overwhelming majority of econometric models applied on a long-term basis in the financial forex market do not work sufficiently well. The reason is that transaction costs and arbitrage opportunities are not included, so the real financial markets are not simulated. Analyses are also conducted on aggregated rather than non-equidistant data, which again does not correspond to the real financial case. In this paper, we would like to show a new way to analyze and, moreover, forecast the financial market. We utilize the projections of the real exchange rate dynamics onto a string-like topology in the OANDA market. The latter approach allows us to build stable prediction models for trading in the financial forex market. A real application of the multi-string structures is provided to demonstrate our ideas for the solution of the robust portfolio selection problem. A comparison with trend-following strategies was performed, and the stability of the algorithm with respect to transaction costs over long trade periods was confirmed.

  9. Nonlinear Autoregressive Exogenous modeling of a large anaerobic digester producing biogas from cattle waste.

    Science.gov (United States)

    Dhussa, Anil K; Sambi, Surinder S; Kumar, Shashi; Kumar, Sandeep; Kumar, Surendra

    2014-10-01

    In waste-to-energy plants, there is every likelihood of variations in the quantity and characteristics of the feed. Although intermediate storage tanks are used, they are often of inadequate capacity to dampen the variations. In such situations an anaerobic digester treating waste slurry operates under dynamic conditions. In this work a special type of dynamic Artificial Neural Network model, called the Nonlinear Autoregressive Exogenous model, is used to model the dynamics of anaerobic digesters, using about one year of data collected on the operating digesters. The developed model consists of two hidden layers, each having 10 neurons, and uses an 18-day delay. There are five neurons in the input layer and one neuron in the output layer for a day. Model predictions of biogas production rate are close to plant performance, within ±8% deviation. Copyright © 2014 Elsevier Ltd. All rights reserved.
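
A NARX model predicts y[t] from lagged outputs and lagged exogenous inputs; that regressor construction is independent of which learner sits on top. A sketch of the dataset-building step (the paper used a two-hidden-layer neural network with an 18-day delay; the 2-step delay and toy series below are purely illustrative):

```python
def narx_regressors(u, y, delay):
    """Build NARX training pairs: [y[t-delay..t-1], u[t-delay..t-1]] -> y[t].

    u: exogenous input series (e.g. feed quantity/characteristics),
    y: output series (e.g. biogas production rate).
    """
    X, targets = [], []
    for t in range(delay, len(y)):
        X.append(y[t - delay:t] + u[t - delay:t])
        targets.append(y[t])
    return X, targets
```

Feeding past outputs back in as regressors is what lets the model capture the digester's slow dynamic response to feed variations.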

  10. A Provenance Model for Real-Time Water Information Systems

    Science.gov (United States)

    Liu, Q.; Bai, Q.; Zednik, S.; Taylor, P.; Fox, P. A.; Taylor, K.; Kloppers, C.; Peters, C.; Terhorst, A.; West, P.; Compton, M.; Shu, Y.; Provenance Management Team

    2010-12-01

    Generating hydrological data products, such as flow forecasts, involves complex interactions among instruments, data simulation models, computational facilities and data providers. Correct interpretation of the data produced at various stages requires good understanding of how data was generated or processed. Provenance describes the lineage of a data product. Making provenance information accessible to hydrologists and decision makers not only helps to determine the data’s value, accuracy and authorship, but also enables users to determine the trustworthiness of the data product. In the water domain, WaterML2 [1] is an emerging standard which describes an information model and format for the publication of water observations data in XML. The W3C semantic sensor network incubator group (SSN-XG) [3] is producing ontologies for the description of sensor configurations. By integrating domain knowledge of this kind into the provenance information model, the integrated information model will enable water domain researchers and water resource managers to better analyse how observations and derived data products were generated. We first introduce the Proof Markup Language (PML2) [2], WaterML2 and the SSN-XG sensor ontology as the proposed provenance representation formalism. Then we describe some initial implementations showing how these standards could be integrated to represent the lineage of water information products. Finally we will highlight how the provenance model for a distributed real-time water information system assists in interpreting the data product and establishing trust. Reference [1] Taylor, P., Walker, G., Valentine, D., Cox, Simon: WaterML2.0: Harmonising standards for water observation data. Geophysical Research Abstracts. Vol. 12. [2] da Silva, P.P., McGuinness, D.L., Fikes, R.: A proof markup language for semantic web services. Inf. Syst. 31(4) (2006), 381-395. [3] W3C Semantic Sensor Network Incubator Group http://www.w3.org/2005/Incubator

  11. Classification of Pecorino cheeses produced in Italy according to their ripening time and manufacturing technique using Fourier transform infrared spectroscopy.

    Science.gov (United States)

    Lerma-García, M J; Gori, A; Cerretani, L; Simó-Alfonso, E F; Caboni, M F

    2010-10-01

    Fourier-transform infrared spectroscopy, followed by linear discriminant analysis of the spectral data, was used to classify Italian Pecorino cheeses according to their ripening time and manufacturing technique. The Fourier transform infrared spectra of the cheeses were divided into 18 regions and the normalized absorbance peak areas within these regions were used as predictors. Linear discriminant analysis models were constructed to classify Pecorino cheeses according to different ripening stages (hard and semi-hard) or according to their manufacturing technique (fossa and nonfossa cheeses). An excellent resolution was achieved according to both ripening time and manufacturing technique. Also, a final linear discriminant analysis model considering the 3 categories (hard nonfossa, hard fossa, and semi-hard nonfossa) was constructed. A good resolution among the 3 categories was obtained. Copyright © 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
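
Linear discriminant analysis of the kind used above projects feature vectors onto a direction w = S_w⁻¹(m₁ − m₀) that best separates the classes. A two-class, two-feature Fisher-LDA sketch (the study used 18 spectral-region predictors and three categories; the toy data below are assumptions):

```python
def fisher_lda(class0, class1):
    """Two-class Fisher LDA in 2-D: w = Sw^{-1} (m1 - m0), with the decision
    threshold midway between the projected class means. Returns a 0/1 classifier."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    m0, m1 = mean(class0), mean(class1)
    sxx = sxy = syy = 0.0                     # pooled within-class scatter
    for pts, m in ((class0, m0), (class1, m1)):
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx; sxy += dx * dy; syy += dy * dy
    det = sxx * syy - sxy * sxy               # invert the 2x2 scatter matrix
    dmx, dmy = m1[0] - m0[0], m1[1] - m0[1]
    w = ((syy * dmx - sxy * dmy) / det, (sxx * dmy - sxy * dmx) / det)
    thresh = (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2.0
    return lambda p: 1 if w[0] * p[0] + w[1] * p[1] > thresh else 0
```

In the cheese application the two "features" would be normalized absorbance areas from spectral regions, and the classes ripening stages or manufacturing techniques.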

  12. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces the interaction which couples to spins of other systems. Simulations from our model show that time series exhibit the volatility clustering that is often observed in the real financial markets. Furthermore we also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where volatilities of stocks are mutually correlated
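
The core ingredient of such market models is Metropolis spin dynamics. A single-system 1-D sketch, without the cross-system coupling the paper introduces (ring size, β and J below are illustrative assumptions):

```python
import math
import random

def metropolis_sweep(spins, beta, J, rng):
    """One Metropolis sweep of a 1-D Ising ring with nearest-neighbour coupling J."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        # energy change from flipping spin i against its two neighbours
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```

In the market interpretation, spins encode buy/sell decisions and aggregate magnetisation changes drive returns; adding a coupling term to the spins of other systems is what produces the cross-correlated volatilities reported above.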

  13. A New Battery Energy Storage Charging/Discharging Scheme for Wind Power Producers in Real-Time Markets

    Directory of Open Access Journals (Sweden)

    Minh Y Nguyen

    2012-12-01

    Under a deregulated environment, wind power producers are subject to many regulation costs due to the intermittence of natural resources and the accuracy limits of existing prediction tools. This paper addresses the operation (charging/discharging problem of battery energy storage installed in a wind generation system in order to improve the value of wind power in the real-time market. Depending on the prediction of market prices and the probabilistic information of wind generation, wind power producers can schedule the battery energy storage for the next day in order to maximize the profit. In addition, by taking into account the expenses of using batteries, the proposed charging/discharging scheme is able to avoid the detrimental operation of battery energy storage which can lead to a significant reduction of battery lifetime, i.e., uneconomical operation. The problem is formulated in a dynamic programming framework and solved by a dynamic programming backward algorithm. The proposed scheme is then applied to the study cases, and the results of simulation show its effectiveness.
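
The backward dynamic-programming recursion can be sketched over a discretised state of charge. Prices, capacity and power limits below are invented, and the paper's battery-degradation costs and probabilistic wind terms are omitted:

```python
def battery_schedule(prices, capacity, power):
    """Backward DP maximising revenue: in each period either idle,
    charge (buy 'power' units of energy) or discharge (sell 'power' units)."""
    value = [0.0] * (capacity + 1)            # value-to-go per state of charge
    policy = []
    for price in reversed(prices):
        new_value, step = [], []
        for s in range(capacity + 1):
            best, act = value[s], 0                          # idle
            if s + power <= capacity:                        # charge
                cand = -price * power + value[s + power]
                if cand > best:
                    best, act = cand, power
            if s - power >= 0:                               # discharge
                cand = price * power + value[s - power]
                if cand > best:
                    best, act = cand, -power
            new_value.append(best)
            step.append(act)
        value = new_value
        policy.append(step)
    policy.reverse()                          # policy[t][soc] = optimal action
    return value[0], policy
```

For a two-period price profile of 1 then 5 with a one-unit battery, the optimal plan is the expected buy-low/sell-high cycle, yielding a profit of 4.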

  14. Progesterone supplementation during the time of pregnancy recognition after artificial insemination improves conception rates in high-producing dairy cows.

    Science.gov (United States)

    Garcia-Ispierto, I; López-Helguera, I; Serrano-Pérez, B; Paso, V; Tuono, T; Ramon, A; Mur-Novales, R; Tutusaus, J; López-Gatius, F

    2016-04-15

    This study examines the possible effects of progesterone (P4) supplementation during the time of pregnancy recognition, from Days 15 to 17 post-artificial insemination (AI), on reproductive performance in high-producing dairy cows. Cows in their 15th day post-AI were alternately assigned to a control, no-treatment group (C: n = 257) or treatment group (P4: n = 287) on a weekly rotational basis according to the chronologic order of their gynecologic visit. On the basis of the odds ratio, the interaction treatment × previous placenta retention had a significant effect (P = 0.02) on conception rate. Thus, cows in P4 that had not suffered a retained placenta were 1.6 times more likely to conceive 28 to 34 days post-AI than the remaining cows. In nonpregnant cows, treatment had no effect on subsequent return to estrus or AI interval and neither were any effects of treatment observed on twin pregnancy and early fetal loss rates. The results of this study demonstrate the efficacy of P4 supplementations during the time of pregnancy recognition after AI in cows without a clinical history of placenta retention. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Multiplex real-time PCR probe-based for identification of strains producing: OXA48, VIM, KPC and NDM.

    Science.gov (United States)

    Favaro, Marco; Sarti, Mario; Fontana, Carla

    2014-11-01

    The spread of multi-resistant enterobacteria, particularly carbapenem-resistant Enterobacteriaceae (CRE), in both community and hospital settings is a global problem. The phenotypic identification of CRE is complex, occasionally inconclusive and time consuming. However, commercially available molecular assays are very expensive, and many do not allow the simultaneous identification of all genetic markers of resistance that have been recognised in CRE (blaKPC, blaOXA-48, blaVIM and blaNDM). The aim of the present study is to describe a new test: a multiplex real-time PCR probe-based assay designed for the simultaneous detection of KPC, OXA-48, VIM and NDM in a short time (no longer than 90 min from the extraction of DNA to detection). Our assay correctly identified 63 CRE isolates and all standard reference strains tested, in agreement with and extending the results of phenotypic identification tests; additionally, a KPC-VIM co-expressing Enterobacter aerogenes isolate was identified using the new assay, whereas traditional methods failed to detect it. The assay was also able to correctly detect 28 CRE producers from 50 positive blood cultures, again detecting, in four specimens, the presence of CRE co-expressing KPC and VIM, which were only partially identified by traditional methods. Finally, when used directly on rectal swabs, the assay enabled the identification of CRE-carrier patients, for whom isolation is mandatory in a hospital setting.

  16. Modeling of a Reaction-Distillation-Recycle System to Produce Dimethyl Ether through Methanol Dehydration

    Science.gov (United States)

    Muharam, Y.; Zulkarnain, L. M.; Wirya, A. S.

    2018-03-01

    The increase in dimethyl ether yield from methanol dehydration achieved by integrating a recycle stream into a reaction-distillation system was studied in this research. A one-dimensional phenomenological model of a methanol dehydration reactor and a shortcut model of the distillation columns were used to achieve this aim. Simulation results show that 10.7 moles/s of dimethyl ether is produced in a reaction-distillation system with a reactor length of 4 m, a reactor inlet pressure of 18 atm, a reactor inlet temperature of 533 K, a reactor inlet velocity of 0.408 m/s, and a distillation pressure of 8 atm. The methanol conversion is 90% and the dimethyl ether yield is 48%. Integrating the recycle stream into the system increases the dimethyl ether yield by 8%.

  17. A new G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, S.H.; Gardner, R.P.

    2000-01-01

    A hybrid G-M counter dead time model was derived by combining the idealized paralyzable and non-paralyzable models. The new model involves two parameters, which are the paralyzable and non-paralyzable dead times. The dead times used in the model are very closely related to the physical dead time of the G-M tube and its resolving time. To check the validity of the model, the decaying source method with ⁵⁶Mn was used. The corrected counting rates from the new G-M dead time model were compared with the observed counting rates obtained from the measurement and gave very good agreement, within 5%, up to 7×10⁴ counts/s for a G-M tube with a dead time of about 300 μs.
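
    The two idealized models that the hybrid combines have standard closed forms; a small sketch follows, with an assumed true rate and the ~300 μs dead time quoted above (the hybrid model's own combined formula is not reproduced here):

```python
# Classical dead-time models: tau is the dead time (s),
# n the true counting rate, m the observed counting rate.
import math

def observed_paralyzable(n, tau):
    """Paralyzable model: m = n * exp(-n * tau)."""
    return n * math.exp(-n * tau)

def observed_nonparalyzable(n, tau):
    """Non-paralyzable model: m = n / (1 + n * tau)."""
    return n / (1.0 + n * tau)

def true_rate_nonparalyzable(m, tau):
    """Invert the non-paralyzable model: n = m / (1 - m * tau)."""
    return m / (1.0 - m * tau)

tau = 300e-6   # ~300 microsecond dead time, as for the G-M tube above
n = 2000.0     # assumed true counting rate (counts/s)
m = observed_nonparalyzable(n, tau)
print(m, true_rate_nonparalyzable(m, tau))  # round trip recovers n
```

    Both models undercount at high rates; the hybrid model interpolates between these two limiting behaviors using its two dead-time parameters.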

  18. A Model for Industrial Real-Time Systems

    DEFF Research Database (Denmark)

    Bin Waez, Md Tawhid; Wasowski, Andrzej; Dingel, Juergen

    2015-01-01

    Introducing automated formal methods for large industrial real-time systems is an important research challenge. We propose timed process automata (TPA) for modeling and analysis of time-critical systems which can be open, hierarchical, and dynamic. The model offers two essential features for large industrial systems: (i) compositional modeling with reusable designs for different contexts, and (ii) an automated state-space reduction technique. Timed process automata model dynamic networks of continuous-time communicating control processes which can activate other processes. We show how to automatically...

  19. Time-of-Flight Measurement of a 355-nm Nd:YAG Laser-Produced Aluminum Plasma

    Directory of Open Access Journals (Sweden)

    M. F. Baclayon

    2003-06-01

    Full Text Available An aluminum target in air was irradiated by a 355-nm Nd:YAG laser with a pulse width of 10 ns and a repetition rate of 10 Hz. The emission spectra of the laser-produced aluminum plasma were investigated at varying distances from the target surface. The results show the presence of a strong continuum very close to the target surface, but as the plasma evolves in space, the continuum gradually disappears and the emitted spectra are dominated by stronger line emissions. The observed plasma species are neutral and singly ionized aluminum, and their speeds were investigated using an optical time-of-flight measurement technique. Results show that the speeds of the plasma species decrease gradually with distance from the target surface. Comparison of the computed speeds of the plasma species shows that the singly ionized species have relatively greater kinetic energy than the neutral species.

  20. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    Science.gov (United States)

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  1. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    Full Text Available The REFII1 model is an original mathematical model for time series data mining. Its main purpose is to automate time series analysis through a unique transformation model for time series. An advantage of this approach is that it links different methods of time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system with a finite set of methods. It is, first of all, a model for transforming the values of a time series, which prepares data used by different sets of methods based on the same transformation model in the domain of the problem space. The REFII model offers a new approach to time series analysis based on a unique transformation model that serves as a basis for all kinds of time series analysis. Its advantage is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  2. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    International Nuclear Information System (INIS)

    Hinder, Ian; Wardell, Barry; Alic, Daniela; Buonanno, Alessandra; Pan, Yi; Boyle, Michael; Etienne, Zachariah B; Healy, James; Johnson-McDaniel, Nathan K; Nagar, Alessandro; Nakano, Hiroyuki; Pfeiffer, Harald P; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A; Sperhake, Ulrich; Szilágyi, Bela; Zenginoğlu, Anıl; Schnetter, Erik; Tichy, Wolfgang

    2013-01-01

    The Numerical–Relativity–Analytical–Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ∼100–200 M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing over the binary parameters. (paper)

  3. Marginal integrity of restorations produced with a model composite based on polyhedral oligomeric silsesquioxane (POSS

    Directory of Open Access Journals (Sweden)

    Luciano Ribeiro CORREA NETTO

    2015-10-01

    Full Text Available Marginal integrity is one of the most crucial aspects involved in the clinical longevity of resin composite restorations. Objective: To analyze the marginal integrity of restorations produced with a model composite based on polyhedral oligomeric silsesquioxane (POSS). Material and Methods: A base composite (B) was produced with an organic matrix of UDMA/TEGDMA and 70 wt.% of barium borosilicate glass particles. To produce the model composite, 25 wt.% of UDMA was replaced by POSS (P25). The composites P90 and TPH3 (TP3) were used as positive and negative controls, respectively. Marginal integrity (%MI) was analyzed in bonded class I cavities. The volumetric polymerization shrinkage (%VS) and the polymerization shrinkage stress (Pss, MPa) were also evaluated. Results: The values for %MI were as follows: P90 (100%) = TP3 (98.3%) = B (96.9%) > P25 (93.2%) (p<0.05). The %VS ranged from 1.4% (P90) to 4.9% (P25), while Pss ranged from 2.3 MPa (P90) to 3.9 MPa (B). For both properties, the composite P25 presented the worst results (4.9% and 3.6 MPa). Linear regression analysis showed a strong positive correlation between %VS and Pss (r=0.97), whereas the correlation between Pss and %MI was found to be moderate (r=0.76). Conclusions: The addition of 25 wt.% of POSS to a methacrylate organic matrix did not improve the marginal integrity of class I restorations. Filtek P90 showed lower polymerization shrinkage and shrinkage stress when compared to the experimental and commercial methacrylate composites.

  4. Enhanced intracellular delivery of a model drug using microbubbles produced by a microfluidic device.

    Science.gov (United States)

    Dixon, Adam J; Dhanaliwala, Ali H; Chen, Johnny L; Hossack, John A

    2013-07-01

    Focal drug delivery to a vessel wall facilitated by intravascular ultrasound and microbubbles holds promise as a potential therapy for atherosclerosis. Conventional methods of microbubble administration result in rapid clearance from the bloodstream and significant drug loss. To address these limitations, we evaluated whether drug delivery could be achieved with transiently stable microbubbles produced in real time and in close proximity to the therapeutic site. Rat aortic smooth muscle cells were placed in a flow chamber designed to simulate physiological flow conditions. A flow-focusing microfluidic device produced 8 μm diameter monodisperse microbubbles within the flow chamber, and ultrasound was applied to enhance uptake of a surrogate drug (calcein). Acoustic pressures up to 300 kPa and flow rates up to 18 mL/s were investigated. Microbubbles generated by the flow-focusing microfluidic device were stabilized with a polyethylene glycol-40 stearate shell and had either a perfluorobutane (PFB) or nitrogen gas core. The gas core composition affected stability, with PFB and nitrogen microbubbles exhibiting half-lives of 40.7 and 18.2 s, respectively. Calcein uptake was observed at lower acoustic pressures with nitrogen microbubbles (100 kPa) than with PFB microbubbles (200 kPa) (p 3). In addition, delivery was observed at all flow rates, with maximal delivery (>70% of cells) occurring at a flow rate of 9 mL/s. These results demonstrate the potential of transiently stable microbubbles produced in real time and in close proximity to the intended therapeutic site for enhancing localized drug delivery. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  5. Mathematical modeling of potentially hazardous nuclear objects with time shifts

    International Nuclear Information System (INIS)

    Gharakhanlou, J.; Kazachkov, I.V.

    2012-01-01

    Aggregate models of potentially hazardous objects with time shifts are used for mathematical modeling and computer simulation. The effects of time delays on time forecasts are analyzed. The influence of shifted arguments on the nonlinear differential equations is discussed. Computer simulation has established the behavior of the potentially hazardous nuclear objects.

  6. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  7. Market volatility modeling for short time window

    Science.gov (United States)

    de Mattos Neto, Paulo S. G.; Silva, David A.; Ferreira, Tiago A. E.; Cavalcanti, George D. C.

    2011-10-01

    The gain or loss of an investment can be defined by the movement of the market. This movement can be estimated by the difference between the magnitudes of two stock prices in distinct periods, and this difference can be used to calculate the volatility of the markets. The volatility characterizes the sensitivity of a market to changes in the world economy. Traditionally, the probability density function (pdf) of the movement of the markets is analyzed using power laws. The contribution of this work is twofold: (i) an analysis of the volatility dynamics of the world market indexes is performed using a two-year window of data. In this case, the experiments show that the pdf of the volatility is better fitted by an exponential function than by power laws over the entire range of the pdf; (ii) after that, we investigate a relationship between the volatility of the markets and the coefficient of the exponential function based on Maxwell-Boltzmann ideal gas theory. The results show an inverse relationship between the volatility and the coefficient of the exponential function. This information can be used, for example, to predict the future behavior of the markets or to cluster the markets in order to analyze economic patterns.
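
    The exponential fit described in contribution (i) can be illustrated on synthetic volatility data; the log-linear least-squares fit below is an assumed procedure for recovering the exponential coefficient, not necessarily the authors' exact method:

```python
# Fit an exponential pdf ~ (1/s) * exp(-x/s) to a volatility histogram
# via log-linear least squares. Data here are synthetic, not real indexes.
import numpy as np

rng = np.random.default_rng(0)
vol = rng.exponential(scale=0.02, size=100_000)   # synthetic volatility samples

counts, edges = np.histogram(vol, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                                  # avoid log(0) in empty bins
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)

# For an exponential pdf, the log-density is linear with slope -1/s.
print(-1.0 / slope)   # estimated scale, close to the true 0.02
```

    A power-law pdf would instead appear linear on a log-log plot, which is the diagnostic that separates the two candidate forms in the abstract.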

  8. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: Materials/Device Degradation · Degradation Kinetics · Time-To-Failure Modeling · Statistical Tools · Failure-Rate Modeling · Accelerated Testing · Ramp-To-Failure Testing · Important Failure Mechanisms for Integrated Circuits · Important Failure Mechanisms for Mechanical Components · Conversion of Dynamic Stresses into Static Equivalents · Small Design Changes Producing Major Reliability Improvements · Screening Methods · Heat Generation and Dissipation · Sampling Plans and Confidence Intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  9. Spectrophotometric analysis of tomato plants produced from seeds exposed under space flight conditions for a long time

    Science.gov (United States)

    Nechitailo, Galina S.; Yurov, S.; Cojocaru, A.; Revin, A.

    The analysis of lycopene and other carotenoids in tomatoes produced from seeds exposed under space flight conditions at the orbital station MIR for six years is presented in this work. Our previous experiments with tomato plants showed the germination of seeds to be 32%. Genetic investigations revealed 18% in the experiment and 8%. Experiments were conducted to study the capacity of various stimulating factors to increase germination of seeds exposed for a long time to the action of space flight factors. An increase of 20% was achieved, but at the same time mutants having no analogues in the control variants were detected. For the present investigations of the third generation of plants produced from seeds stored for a long time under space flight conditions, 80 tomatoes from forty plants were selected. The concentration of lycopene in the experimental specimens was 2.5-3 times higher than in the control variants. The spectrophotometric analysis of ripe tomatoes revealed typical three-peaked carotenoid spectra with a high maximum of lycopene (a medium maximum at 474 nm), a moderate maximum of its precursor, phytoene (a medium maximum at 267 nm), and a low maximum of carotenes. In green tomatoes, on the contrary, a high maximum of phytoene, a moderate maximum of lycopene and a low maximum of carotenes were observed. The results of the spectral analysis point to the retardation of the biosynthesis of carotenes while the production of lycopene is increased, and to the synthesis of lycopene from phytoene. The electric conduction of tomato juice in the experimental samples is increased, thus suggesting higher amounts of carotenoids, including lycopene, and electrolytes. The higher the value of electric conduction of a specimen, the higher the spectral maxima of lycopene. The hydrogen ion exponent of the juice of ripe tomatoes increases, due to which the efficiency of ATP biosynthesis in cell mitochondria is likely to increase, too. The results demonstrating an increase in the content

  10. Lichen Parmelia sulcata time response model to environmental elemental availability

    International Nuclear Information System (INIS)

    Reis, M.A.; Alves, L.C.; Freitas, M.C.; Os, B. van; Wolterbeek, H.Th.

    2000-01-01

    Transplants of the lichen Parmelia sulcata, collected in an area previously identified as non-polluted, were placed at six stations, five of which were near power plants and the other in an area expected to be a remote station. Together with the lichen transplants, two total deposition collection buckets and an aerosol sampler were installed. Two lichens were recollected every month from each station. At the same time the water collection buckets were replaced by new ones. The aerosol sampler filter was replaced every week, collection being effective only for 10 minutes out of every two hours; in the remote station aerosol filters were replaced only once a month, keeping the same collection rate. Each station was run for a period of one year. Both lichens and aerosol filters were analysed by PIXE and INAA at ITN. Total deposition samples were dried under an infrared lamp, then acid digested and analysed by ICP-MS at the National Geological Survey of The Netherlands. Data for the three types of samples were then produced for a total of 16 elements. In this work we used the data set thus obtained to test a model for the time response of the lichen Parmelia sulcata to a new environment. (author)

  11. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.
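
    A toy illustration of why a two-state Gaussian hidden Markov model produces persistent squared returns: with highly persistent states, the volatility regime carries over from day to day. The transition probabilities and state volatilities are assumed values, and the parameters here are fixed rather than time-varying:

```python
# Simulate a two-state Gaussian HMM and measure the lag-1 autocorrelation
# of squared returns (volatility clustering). Parameters are assumed.
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.99, 0.01],      # state transition matrix (highly persistent)
              [0.02, 0.98]])
sigma = np.array([0.5, 2.0])     # low- and high-volatility state std devs

T, state = 50_000, 0
returns = np.empty(T)
for t in range(T):
    state = rng.choice(2, p=P[state])          # hidden regime evolves
    returns[t] = rng.normal(0.0, sigma[state]) # observed daily return

sq = returns ** 2
sq = sq - sq.mean()
acf1 = np.dot(sq[:-1], sq[1:]) / np.dot(sq, sq)
print(acf1)   # clearly positive: squared returns are persistent
```

    The paper's point is the slow decay of this autocorrelation at long lags, which fixed-parameter models struggle to match and time-varying parameters reproduce.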

  12. A new timing model for calculating the intrinsic timing resolution of a scintillator detector

    International Nuclear Information System (INIS)

    Shao Yiping

    2007-01-01

    The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions
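
    The effect of the rise-time parameter can be illustrated with a Monte Carlo sketch: photoelectron arrival times are drawn from a bi-exponential emission density (sampled as the sum of a rise delay and a decay delay), and the spread of the earliest detected photoelectron is compared against the single-exponential case. All parameter values are illustrative assumptions, not the paper's numbers:

```python
# A bi-exponential emission pdf ~ exp(-t/tau_decay) - exp(-t/tau_rise)
# is the density of a sum of two independent exponential delays, so it
# can be sampled directly. Times are in picoseconds; values are assumed.
import numpy as np

rng = np.random.default_rng(1)

def first_photon_std(tau_rise, tau_decay, n_pe=1000, n_events=4000):
    """Std dev of the earliest photoelectron time over many events."""
    t = rng.exponential(tau_decay, (n_events, n_pe))
    if tau_rise > 0:
        t += rng.exponential(tau_rise, (n_events, n_pe))
    return t.min(axis=1).std()

res_single = first_photon_std(0.0, 40_000.0)   # decay-only (single-exponential)
res_bi = first_photon_std(500.0, 40_000.0)     # 0.5 ns rise, 40 ns decay
print(res_single, res_bi)                      # rise time degrades resolution
```

    Even a sub-nanosecond rise time visibly broadens the first-photoelectron distribution, which is why the single-exponential model breaks down for fast scintillators, as the abstract notes.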

  13. Assessment of HRSC Digital Terrain Models Produced for the South Polar Residual Cap

    Science.gov (United States)

    Putri, Alfiah Rizky Diana; Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2017-04-01

    The Digital Terrain Models currently available for Mars include the NASA MOLA (Mars Orbiter Laser Altimeter) DTMs, with an average resolution of 112 m/pixel (512 pixels/degree) for the polar region. The ESA/DLR High Resolution Stereo Camera is currently orbiting Mars and mapping its surface, 98% at a resolution of ≤100 m/pixel and 100% at lower resolution [1]. It is possible to produce Digital Terrain Models from HRSC images using various methods. In this study, the method developed by Kim and Muller [2] was used, which combines the open-source VICAR program with photogrammetry software from DLR (Deutsches Zentrum für Luft- und Raumfahrt) and image matching based on the GOTCHA (Gruen-Otto-Chau) algorithm [3]. Digital Terrain Models have been processed over the South Pole, with emphasis on areas around the South Polar Residual Cap, from High Resolution Stereo Camera images [4]. Digital Terrain Models have been produced for 31 of the 149 polar orbits available. This study analyses the quality of the DTMs, including an assessment of elevation accuracy against the MOLA MEGDR (Mission Experiment Gridded Data Records), which comprises roughly 42 million MOLA PEDR (Precision Experiment Data Records) points between latitudes 78°S and 90°S. The issues encountered in the production of the Digital Terrain Models are described, and the statistical results and assessment method are presented. The resultant DTMs will be accessible via http://i-Mars.eu/web-GIS References: [1] Neukum, G. et al., 2004. Mars Express: The Scientific Payload, pp. 17-35. [2] Kim, J.-R. and J.-P. Muller. 2009. PSS vol. 57, pp. 2095-2112. [3] Shin, D. and J.-P. Muller. 2012. Pattern Recognition, 45(10), 3795-3809. [4] Putri, A. R. D., et al., Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B4, 463-469. Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the

  14. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  15. Rapid and Accurate Identification by Real-Time PCR of Biotoxin-Producing Dinoflagellates from the Family Gymnodiniaceae

    Directory of Open Access Journals (Sweden)

    Kirsty F. Smith

    2014-03-01

    Full Text Available The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.

  16. Rapid and accurate identification by real-time PCR of biotoxin-producing dinoflagellates from the family gymnodiniaceae.

    Science.gov (United States)

    Smith, Kirsty F; de Salas, Miguel; Adamson, Janet; Rhodes, Lesley L

    2014-03-07

    The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.

  17. Modeling non-Gaussian time-varying vector autoregressive process

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a novel and general methodology for modeling time-varying vector autoregressive processes which are widely used in many areas such as modeling of chemical...

  18. Tempered fractional time series model for turbulence in geophysical flows

    Science.gov (United States)

    Meerschaert, Mark M.; Sabzikar, Farzad; Phanikumar, Mantha S.; Zeleke, Aklilu

    2014-09-01

    We propose a new time series model for velocity data in turbulent flows. The new model employs tempered fractional calculus to extend the classical 5/3 spectral model of Kolmogorov. Application to wind speed and water velocity in a large lake are presented, to demonstrate the practical utility of the model.

  19. Tempered fractional time series model for turbulence in geophysical flows

    International Nuclear Information System (INIS)

    Meerschaert, Mark M; Sabzikar, Farzad; Phanikumar, Mantha S; Zeleke, Aklilu

    2014-01-01

    We propose a new time series model for velocity data in turbulent flows. The new model employs tempered fractional calculus to extend the classical 5/3 spectral model of Kolmogorov. Application to wind speed and water velocity in a large lake are presented, to demonstrate the practical utility of the model. (paper)

  20. vector bilinear autoregressive time series model and its superiority ...

    African Journals Online (AJOL)

    In this research, a vector bilinear autoregressive time series model was proposed and used to model three revenue series ... showed that vector bilinear autoregressive (BIVAR) models provide better estimates than the long-embraced linear models. ... order moving average (MA) polynomials on the backward shift operator B ...

  1. A model with nonzero rise time for AE signals

    Indian Academy of Sciences (India)

    models, which, while retaining these merits, can also incorporate rise time. We present such a model in the following. 2. Proposed model. The decaying sinusoidal model of (1) can be described in terms of communication terminology as the envelope function A0 exp(−αt) amplitude modulating the sinusoidal signal sin(2πf0t).
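
    A numerical sketch of the contrast this record draws: the classical decaying sinusoid peaks immediately, whereas multiplying in a rise factor delays the envelope peak, giving a nonzero rise time. The particular rise factor (1 − exp(−βt)) and all parameter values are illustrative assumptions, not necessarily the model the paper proposes:

```python
# Classical AE model: s(t) = A0 * exp(-alpha*t) * sin(2*pi*f0*t).
# Adding an assumed rise factor (1 - exp(-beta*t)) delays the envelope peak.
import numpy as np

A0, alpha, beta, f0 = 1.0, 2e4, 1e5, 150e3   # illustrative AE-like values
t = np.linspace(0.0, 200e-6, 20_000)         # 200 microseconds of signal

env_classic = A0 * np.exp(-alpha * t)
env_rise = env_classic * (1.0 - np.exp(-beta * t))

s_classic = env_classic * np.sin(2 * np.pi * f0 * t)
s_rise = env_rise * np.sin(2 * np.pi * f0 * t)

# Envelope peak times: 0 for the classic model, > 0 with the rise factor.
print(t[env_classic.argmax()], t[env_rise.argmax()])
```

    For this envelope the peak falls at t = ln((α+β)/α)/β, so the rise time is tunable through β independently of the decay constant α.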

  2. forecasting with nonlinear time series model: a monte-carlo ...

    African Journals Online (AJOL)

    PUBLICATIONS1

    with nonlinear time series models by comparing the RMSE with the traditional bootstrap and Monte-Carlo methods of forecasting. We use the logistic smooth transition autoregressive (LSTAR) model as a case study. We first consider a linear model called the AR(p) model of order p which satisfies the following linear ...

  3. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results demonstrate the effectiveness of the travel time estimation method.

  4. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters’ posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
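The stick-breaking representation underlying the DPMM can be sketched directly. This is a generic truncated stick-breaking construction of mixture weights (truncated at six components, as in the study), not the authors' estimation code; the concentration parameter is illustrative.

```python
import random

def stick_breaking(alpha, n_components, seed=0):
    """Truncated stick-breaking construction of Dirichlet process weights.

    Each component takes a Beta(1, alpha) fraction of the stick that
    remains; the final component keeps whatever is left, so the
    truncated weights sum to one.  The concentration alpha is
    illustrative.
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_components - 1):
        fraction = rng.betavariate(1.0, alpha)  # break off a piece of the stick
        weights.append(fraction * remaining)
        remaining *= 1.0 - fraction
    weights.append(remaining)                   # last component gets the rest
    return weights

weights = stick_breaking(alpha=1.0, n_components=6)
```

In a full DPMM these weights would be paired with component parameters (here, lognormal travel-time components) and resampled within MCMC.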

  5. Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca

    Directory of Open Access Journals (Sweden)

    Luca Aceto

    2011-07-01

    Full Text Available In this paper we propose an extension of the Rebeca language that can be used to model distributed and asynchronous systems with timing constraints. We provide the formal semantics of the language using Structural Operational Semantics, and show its expressiveness by means of examples. We developed a tool for automated translation from timed Rebeca to the Erlang language, which provides a first implementation of timed Rebeca. We can use the tool to set the parameters of timed Rebeca models, which represent the environment and component variables, and use McErlang to run multiple simulations for different settings. Timed Rebeca restricts the modeller to a pure asynchronous actor-based paradigm, where the structure of the model represents the service oriented architecture, while the computational model matches the network infrastructure. Simulation is shown to be an effective analysis support, especially where model checking faces almost immediate state explosion in an asynchronous setting.

  6. Selection of ESBL-Producing E. coli in a Mouse Intestinal Colonization Model.

    Science.gov (United States)

    Hertz, Frederik Boëtius; Nielsen, Karen Leth; Frimodt-Møller, Niels

    2018-01-01

    Asymptomatic human carriage of antimicrobial drug-resistant pathogens prior to infection is increasing worldwide. Further investigation into the role of this fecal reservoir is important for combatting the increasing antimicrobial resistance problem. Additionally, the damage to the intestinal microflora caused by antimicrobial treatment is still not fully understood. Animal models are powerful tools to investigate bacterial colonization subsequent to antibiotic treatment. In this chapter we present a mouse intestinal colonization model designed to investigate how antibiotics select for an ESBL-producing E. coli isolate. The model can be used to study how antibiotics with varying effects on the intestinal flora promote the establishment of multidrug-resistant E. coli. Colonization is investigated by sampling and culturing stool during the days following administration of antibiotics. Following culturing, precise identification of the bacterial strain found in mouse feces is applied to ensure that the isolate found is in fact identical to the strain used for inoculation. For this purpose, random amplified polymorphic DNA (RAPD) PCR specifically developed for E. coli is applied. This method allows us to distinguish E. coli with more than 99.95% genome similarity using a duplex PCR method.

  7. Modeling of X-ray Images and Energy Spectra Produced by Stepping Lightning Leaders

    Science.gov (United States)

    Xu, Wei; Marshall, Robert A.; Celestin, Sebastien; Pasko, Victor P.

    2017-11-01

    Recent ground-based measurements at the International Center for Lightning Research and Testing (ICLRT) have greatly improved our knowledge of the energetics, fluence, and evolution of X-ray emissions during natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, using Monte Carlo simulations and the response matrix of unshielded detectors in the Thunderstorm Energetic Radiation Array (TERA), we calculate the energy spectra of X-rays as would be detected by TERA and directly compare with the observational data during event MSE 10-01. The good agreement obtained between TERA measurements and theoretical calculations supports the mechanism of X-ray production by thermal runaway electrons during the negative corona flash stage of stepping lightning leaders. Modeling results also suggest that measurements of X-ray bursts can be used to estimate the approximate range of potential drop of lightning leaders. Moreover, the X-ray images produced during the leader stepping process in natural negative CG discharges, including both the evolution and morphological features, are theoretically quantified. We show that the compact emission pattern as recently observed in X-ray images is likely produced by X-rays originating from the source region, and the diffuse emission pattern can be explained by the Compton scattering effects.

  8. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  9. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    Directory of Open Access Journals (Sweden)

    Yuhan Rao

    2015-06-01

    Full Text Available Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions for current NDVI datasets. Therefore, several methods have been developed to produce high-resolution (spatial and temporal) NDVI time-series datasets, but these face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to accurately and efficiently blend MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. This method first unmixes the NDVI temporal changes in the MODIS time-series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows high accuracy (average difference: −0.0070; average absolute difference: 0.0228; and average absolute relative difference: 4.02%) and computation efficiency of NDVI-LMGM (31 seconds using a personal computer). Experiments over more complex landscapes and long-term time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM), and Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than current methods. The proposed method will benefit land surface process research, which requires dense NDVI time-series datasets with high spatial resolution.

  10. A time fractional model to represent rainfall process

    Directory of Open Access Journals (Sweden)

    Jacques Golder

    2014-01-01

    Full Text Available This paper deals with a stochastic representation of the rainfall process. The analysis of a rainfall time series shows that the cumulative representation of a rainfall time series can be modeled as a non-Gaussian random walk with a log-normal jump distribution and a waiting-time distribution following a tempered α-stable probability law. Based on the random walk model, a fractional Fokker-Planck equation (FFPE) with tempered α-stable waiting times was obtained. Through the comparison of observed data and simulated results from the random walk model and the FFPE model with tempered α-stable waiting times, it can be concluded that the behavior of the rainfall process is globally reproduced, and the FFPE model with tempered α-stable waiting times is more efficient in reproducing the observed behavior.

  11. Comparison of mid-Pliocene climate predictions produced by the HadAM3 and GCMAM3 General Circulation Models

    Science.gov (United States)

    Haywood, A.M.; Chandler, M.A.; Valdes, P.J.; Salzmann, U.; Lunt, D.J.; Dowsett, H.J.

    2009-01-01

    The mid-Pliocene warm period (ca. 3 to 3.3 million years ago) has become an important interval of time for palaeoclimate modelling exercises, with a large number of studies published during the last decade. However, there has been no attempt to assess the degree of model dependency of the results obtained. Here we present an initial comparison of mid-Pliocene climatologies produced by the Goddard Institute for Space Studies and Hadley Centre for Climate Prediction and Research atmosphere-only General Circulation Models (GCMAM3 and HadAM3). Whilst both models are consistent in the simulation of broad-scale differences in mid-Pliocene surface air temperature and total precipitation rates, significant variation is noted on regional and local scales. There are also significant differences in the model predictions of total cloud cover. A terrestrial data/model comparison, facilitated by the BIOME 4 model and a new data set of Piacenzian Stage land cover [Salzmann, U., Haywood, A.M., Lunt, D.J., Valdes, P.J., Hill, D.J., (2008). A new global biome reconstruction and data model comparison for the Middle Pliocene. Global Ecology and Biogeography 17, 432-447, doi:10.1111/j.1466-8238.2007.00381.x] and combined with the use of Kappa statistics, indicates that HadAM3-based biome predictions provide a closer fit to proxy data in the mid to high-latitudes. However, GCMAM3-based biomes in the tropics provide the closest fit to proxy data. © 2008 Elsevier B.V.

  12. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
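The convolution step described in this record can be illustrated with discrete travel-time distributions. This sketch assumes independent links, which is exactly the assumption the paper relaxes via link-to-link conditional probabilities; the convolution mechanics are the same either way.

```python
import numpy as np

def route_time_distribution(link_pmfs):
    """Convolve per-link travel time PMFs into a route-level PMF.

    Each PMF is indexed in 1-second bins (index i = i seconds) and the
    links are assumed independent.  The paper's contribution is to
    replace this independence assumption with link-to-link conditional
    probabilities; the convolution step itself is unchanged.
    """
    route = np.array([1.0])            # zero travel time with probability 1
    for pmf in link_pmfs:
        route = np.convolve(route, pmf)
    return route

link_a = np.array([0.0, 0.2, 0.8])     # 1 s w.p. 0.2, 2 s w.p. 0.8
link_b = np.array([0.0, 0.5, 0.5])     # 1 s or 2 s, equally likely
route = route_time_distribution([link_a, link_b])
```

Here `route[k]` is the probability that the two-link route takes k seconds; the result is a proper PMF over 2, 3, and 4 seconds.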

  13. The Climatology of Extreme Surge-Producing Extratropical Cyclones in Observations and Models

    Science.gov (United States)

    Catalano, A. J.; Broccoli, A. J.; Kapnick, S. B.

    2016-12-01

    Extreme coastal storms devastate heavily populated areas around the world by producing powerful winds that can create a large storm surge. Both tropical and extratropical cyclones (ETCs) occur over the northwestern Atlantic Ocean, and the risks associated with ETCs can be just as severe as those associated with tropical storms (e.g. high winds, storm surge). At The Battery in New York City, 17 of the 20 largest storm surge events were a consequence of ETCs, which are more prevalent than tropical cyclones in the northeast region of the United States. Therefore, we analyze the climatology of ETCs that are capable of producing a large storm surge along the northeastern coast of the United States. For a historical analysis, water level data were collected from National Oceanic and Atmospheric Administration (NOAA) tide gauges at three separate locations (Sewell's Pt., VA, The Battery, NY, and Boston, MA). We perform a k-means cluster analysis of sea level pressure from the ECMWF 20th Century Reanalysis dataset (ERA-20C) to explore the natural sets of observed storms with similar characteristics. We then composite cluster results with features of atmospheric circulation to observe the influence of interannual and multidecadal variability such as the North Atlantic Oscillation. Since observational records contain a small number of well-documented ETCs, the capability of a high-resolution coupled climate model to realistically simulate such extreme coastal storms will also be assessed. Global climate models provide a means of simulating a much larger sample of extreme events, allowing for better resolution of the tail of the distribution. We employ a tracking algorithm to identify ETCs in a multi-century simulation under present-day conditions. Quantitative comparisons of cyclogenesis, cyclolysis, and cyclone densities of simulated ETCs and storms from recent history (using reanalysis products) are conducted.

  14. Performance evaluation of paper embossing tools produced by fused deposition modelling additive manufacturing technology

    Directory of Open Access Journals (Sweden)

    Gordana Delić

    2017-12-01

    Full Text Available From its beginnings up to a few years ago, additive manufacturing technology was able to produce models or prototypes of limited use because of the mechanical properties of the materials. With the advancement and invention of new materials, this is changing. It is now possible to create 3D prints that can be used as final products or functional tools, using technology and materials with low environmental impact. The goal of this study was to examine opportunities for the production of paper embossing tools by fused deposition modelling (FDM) 3D printing. This study emphasises the use of environmentally friendly poly-lactic acid (PLA) materials in FDM technology, contrary to the conventional method using metal alloys and acids. Embossing of line elements and letters using 3D printed embossing tools was done on six different types of paper. Embossing force was applied using a SHIMADZU EZ-LX Compact Tabletop Testing Machine. Each type of paper was repeatedly embossed using different values of embossing force (in 250 N increments, starting at 1000 N) to determine the optimal embossing force for each specific paper type. When determined, the optimal embossing force was used on ten samples for each paper type. Results of embossing were analysed and evaluated. The analysis consisted of investigating the effects of the applied embossing force and characteristics such as paper basis weight, paper structure, surface characteristics and fibre direction of the paper. Results show that paper characteristics determine the embossing force required for achieving a good embossing result. This means that with the right amount of embossing force, letters and borderlines can be equally well formed by the embossing process regardless of paper weight, surface characteristics, etc. Embossing tools produced in this manner can be used in the case of embossing elements that are not complex. The reason for this is the limitation of FDM technology and the lack of precision needed for fine

  15. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  16. A stochastic surplus production model in continuous time

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte

    2017-01-01

    Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state ... and improve estimation of reference points relative to discrete-time analysis of aggregated annual data. Finally, subannual data from five North Sea stocks are analysed with particular focus on using residual analysis to diagnose model insufficiencies and identify necessary model extensions such as robust ...
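SPiCT itself is a continuous-time state-space model estimated from data. As a deterministic skeleton only, here is an Euler step of the classical Schaefer surplus production dynamics that underlie such models; r and K are illustrative, and all noise terms are omitted.

```python
def schaefer_step(biomass, catch, r=0.5, k=1000.0):
    """One Euler step of Schaefer surplus production dynamics:

        dB/dt = r * B * (1 - B / K) - C

    SPiCT embeds these dynamics in a continuous-time state-space model
    with process noise and a noisy catch equation; this deterministic
    step is only the skeleton, with illustrative r and K.
    """
    surplus = r * biomass * (1.0 - biomass / k)
    return max(biomass + surplus - catch, 0.0)

# Fishing exactly at MSY = r*K/4 holds the stock at B = K/2.
biomass = 500.0
for _ in range(50):
    biomass = schaefer_step(biomass, catch=125.0)
```

With these parameters the surplus production at B = K/2 equals the catch, so the stock sits at its maximum-sustainable-yield equilibrium.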

  17. A model for quantification of temperature profiles via germination times

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated ...

  18. Powered bone marrow biopsy procedures produce larger core specimens, with less pain, in less time than with standard manual devices

    Directory of Open Access Journals (Sweden)

    Larry J. Miller

    2011-07-01

    Full Text Available Bone marrow sampling remains essential in the evaluation of hematopoietic and many non-hematopoietic disorders. One common limitation to these procedures is the discomfort experienced by patients. To address whether a Powered biopsy system could reduce discomfort while providing equivalent or better results, we performed a randomized trial in adult volunteers. Twenty-six subjects underwent bilateral biopsies with each device. Core samples were obtained in 66.7% of Manual insertions and 100% of Powered insertions (P=0.002). Initial mean biopsy core lengths were 11.1±4.5 mm for the Manual device and 17.0±6.8 mm for the Powered device (P<0.005). Pathology assessment for the Manual device showed a mean length of 6.1±5.6 mm, width of 1.0±0.7 mm, and volume of 11.0±10.8 mm3. Powered device measurements were a mean length of 15.3±6.1 mm, width of 2.0±0.3 mm, and volume of 49.1±21.5 mm3 (P<0.001). The mean time to core ejection was 86 seconds for the Manual device and 47 seconds for the Powered device (P<0.001). The mean second-look overall pain score was 33.3 for the Manual device and 20.9 for the Powered device (P=0.039). We conclude that the Powered biopsy device produces superior-sized specimens, with less overall pain, in less time.

  19. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    Science.gov (United States)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2017-11-01

    In this study, we extended the application of linear and nonlinear time models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with linear AR model. Unlike previous studies that typically consider the threshold model specifications by using internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables specification produce more accurate forecasts, indicating that specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
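A minimal sketch of the SETAR idea compared in this record, using the series' own lagged value as the threshold variable (the study also uses external transition variables); coefficients and noise scale are illustrative.

```python
import random

def setar_simulate(n, threshold=0.0, phi_low=0.6, phi_high=-0.4,
                   sigma=1.0, seed=42):
    """Simulate a two-regime SETAR(2;1,1) series: the AR(1) coefficient
    switches according to whether the previous value is below or above
    the threshold.  Here the threshold variable is the series itself;
    all coefficients are illustrative.
    """
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        phi = phi_low if y[-1] <= threshold else phi_high
        y.append(phi * y[-1] + rng.gauss(0.0, sigma))
    return y

def setar_forecast(last, threshold=0.0, phi_low=0.6, phi_high=-0.4):
    """One-step point forecast: conditional mean of the active regime."""
    phi = phi_low if last <= threshold else phi_high
    return phi * last

series = setar_simulate(500)
forecast = setar_forecast(series[-1])
```

Comparing such regime-dependent forecasts against a plain AR(1) benchmark mirrors, in miniature, the out-of-sample comparison the study performs.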

  1. Relationships among gas production, end products of rumen fermentation and microbial N produced in vitro at two incubation times

    DEFF Research Database (Denmark)

    Cattani, Mirko; Maccarana, Laura; Hansen, Hanne Helene

    2013-01-01

    This experiment compared linear relationships among end-products of rumen fermentation measured at the time (t½) at which a feed produces half of its asymptotic gas production, or at 48 h. Meadow hay and corn grain were incubated for t½ (16 and 9 h, respectively) or for 48 h in glass bottles. Each bottle (310 ml) was filled with feed sample (0.5 g) and 75 ml of buffered rumen fluid, and incubated at 39.0°C. Gas production (GP) was measured using the ANKOMRF System, and gas accumulated in the headspace of bottles was released at 3.4 kPa. At t½ or 48 h, fermentation fluids were analysed ... At t½, the valerate content in rumen fluid was negligible. However, relatively large amounts of valerate were measured after 48 h, probably the result of microbial lysis. Results suggest that relationships among end-products of rumen fermentation can be more accurately evaluated at a substrate...

  2. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    generated recursively up to any step greater than one. For nonlinear time series models, a point forecast for step one can be done easily, as in the linear case, but a forecast for a step greater than or equal to ..... London. Franses, P. H. (1998). Time series models for business and economic forecasting, Cambridge University Press.

  3. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.
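A two-state Gaussian hidden Markov model of daily returns, as referenced in this record, can be sketched as a generative simulation. Transition probabilities and state parameters below are illustrative, and the paper's adaptive (time-varying) re-estimation is not reproduced.

```python
import random

def sample_two_state_hmm(n, seed=1):
    """Sample daily returns from a two-state Gaussian hidden Markov model.

    State 0 is a persistent calm regime (low volatility), state 1 a
    turbulent regime (high volatility).  All parameters are
    illustrative; in the paper they are re-estimated adaptively so that
    they drift over time.
    """
    rng = random.Random(seed)
    stay = [0.99, 0.97]                # probability of remaining in each state
    mu = [0.0005, -0.001]
    sigma = [0.007, 0.02]
    state, states, returns = 0, [], []
    for _ in range(n):
        if rng.random() > stay[state]: # otherwise switch regimes
            state = 1 - state
        states.append(state)
        returns.append(rng.gauss(mu[state], sigma[state]))
    return states, returns

states, returns = sample_two_state_hmm(10000)
```

The persistence of the two regimes is what lets such a model mimic volatility clustering; making the parameters themselves time-varying is the paper's route to long memory in squared returns.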

  4. Integral-Value Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form.

  5. Theoretical thermal dosimetry produced by an annular phased array system in CT-based patient models

    International Nuclear Information System (INIS)

    Paulsen, K.D.; Strohbehn, J.W.; Lynch, D.R.

    1984-01-01

    Theoretical calculations for the specific absorption rate (SAR) and the resulting temperature distributions produced by an annular phased array (APA) type system are made. The finite element numerical method is used in the formulation of both the electromagnetic (EM) and the thermal boundary value problems. A number of detailed patient models based on CT-scan data from the pelvic, visceral, and thoracic regions are generated to simulate a variety of tumor locations and surrounding normal tissues. The SAR values from the EM solution are input into the bioheat transfer equation, and steady-state temperature distributions are calculated for a wide variety of blood flow rates. Based on theoretical modeling, the APA shows no preferential heating of superficial over deep-seated tumors. However, in most cases satisfactory thermal profiles (therapeutic volume near 60%) are obtained in all three regions of the human trunk only for tumors with little or no blood flow. Unsatisfactory temperature patterns (therapeutic volume <50%) are found for tumors with moderate to high perfusion rates. These theoretical calculations should aid the clinician in the evaluation of the effectiveness of APA type devices in heating tumors located in the trunk region

  6. Opioid Mechanism Involvement in the Synergism Produced by the Combination of Diclofenac and Caffeine in the Formalin Model

    OpenAIRE

    Flores-Ramos, José María; Díaz-Reval, M. Irene

    2013-01-01

    Analgesics can be administered in combination with caffeine for improved analgesic effectiveness in a process known as synergism. The mechanisms by which these combinations produce synergism are not yet fully understood. The aim of this study was to analyze whether the administration of diclofenac combined with caffeine produced antinociceptive synergism and whether opioid mechanisms played a role in this event. The formalin model was used to evaluate the antinociception produced by the oral ...

  7. Multi-model Cross Pollination in Time via Data Assimilation

    Science.gov (United States)

    Du, H.; Smith, L. A.

    2015-12-01

    Nonlinear dynamical systems are frequently used to model physical processes including fluid dynamics, weather and climate. Uncertainty in the observations makes identification of the exact state impossible for a chaotic nonlinear system; this suggests forecasts based on an ensemble of initial conditions to reflect the inescapable uncertainty in the observations. In general, when forecasting real systems, the model class from which the particular model equations are drawn does not contain a process that is able to generate trajectories consistent with the data. Multi-model ensembles have become popular tools to account for uncertainties due to observational noise and structural model error in weather and climate simulation-based predictions on time scales from days to seasons and centuries. There have been some promising results suggesting that multi-model ensemble forecasts outperform single-model forecasts. Current multi-model ensemble forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently, every single model is likely to contain different local dynamical information from that of other models. Using statistical post-processing, such information is only carried by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of integrating the dynamical information from each individual model operationally in time. The proposed method generates model states in time by applying advanced nonlinear data assimilation scheme(s) over the multi-model forecasts. The proposed approach is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow.
It is suggested that this illustration could form the basis for more general results which
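The Lorenz96 flow used here as a forecast test bed is straightforward to reproduce. Below is a minimal sketch of the 40-dimensional system; the function names, the conventional forcing F = 8 and the RK4 step size are illustrative choices, not details taken from the paper:

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Lorenz96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 40-dimensional flow as in the abstract; small perturbation of the fixed point
rng = np.random.default_rng(0)
x = 8.0 + 0.01 * rng.standard_normal(40)
for _ in range(500):          # spin up onto the attractor
    x = rk4_step(x, 0.05)
print(x.shape)  # (40,)
```

Ensemble experiments of the kind described above would start many such trajectories from perturbed initial conditions.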

  8. Characterization of Models for Time-Dependent Behavior of Soils

    DEFF Research Database (Denmark)

    Liingaard, Morten; Augustesen, Anders; Lade, Poul V.

    2004-01-01

Different classes of constitutive models have been developed to capture the time-dependent viscous phenomena (creep, stress relaxation, and rate effects) observed in soils. Models based on empirical, rheological, and general stress-strain-time concepts have been studied. The first part ... developed for metals and steel but are, to some extent, used to characterize time effects in geomaterials. The third part is a review of constitutive laws that describe not only viscous effects but also the inviscid (rate-independent) behavior of soils, in principle, under any possible loading condition. Special attention is paid to elastoviscoplastic models that combine inviscid elastic and time-dependent plastic behavior. Various general elastoviscoplastic models can roughly be divided into two categories: models based on the concept of overstress and models based on nonstationary flow surface theory ...

  9. Evaluation of Digital Model Accuracy and Time-dependent ...

    African Journals Online (AJOL)

    2017-10-26

Objectives: The aim of this study was to evaluate the accuracy of digital models produced with a three-dimensional dental scanner, and to test the dimensional stability of alginate impressions poured immediately (T0) and after 1 day (T1) and 2 days (T2). Materials and Methods: A total of sixty impressions ...

  10. Fundamental State Space Time Series Models for JEPX Electricity Prices

    Science.gov (United States)

    Ofuji, Kenta; Kanemoto, Shigeru

Time series models are popular in attempts to model and forecast price dynamics in various markets. In this paper, we formulate two state space models and test their applicability to power price modeling and forecasting using JEPX (Japan Electric Power eXchange) data. State space models generally have a high degree of flexibility, with time-dependent state transition matrices and system equation configurations. Based on empirical data analysis and past literature, we used calculation assumptions to a) extract a stochastic trend component to capture non-stationarity, and b) detect structural changes underlying the market. The stepwise calculation algorithm followed that of the Kalman filter. We then evaluated the two models' forecasting capabilities in comparison with ordinary AR (autoregressive) and ARCH (autoregressive conditional heteroskedasticity) models. With properly chosen explanatory variables, the latter state space model yielded forecasting capability as good as that of the AR and ARCH models for a short forecasting horizon.
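The stochastic-trend extraction mentioned above is typically done with a Kalman filter. Below is a minimal local-level sketch (a simplification of the general state space form, not the authors' JEPX specification; all names and values are illustrative):

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the local-level model:
       state:       mu_t = mu_{t-1} + w_t,  w_t ~ N(0, q)
       observation: y_t  = mu_t + v_t,      v_t ~ N(0, r)
    Returns the filtered state means (the stochastic trend estimate)."""
    m, p = m0, p0
    means = []
    for obs in y:
        p = p + q                # predict
        k = p / (p + r)          # Kalman gain
        m = m + k * (obs - m)    # update mean with the innovation
        p = (1.0 - k) * p        # update variance
        means.append(m)
    return np.array(means)

# noisy observations of a slowly drifting level
rng = np.random.default_rng(1)
level = np.cumsum(0.1 * rng.standard_normal(200)) + 50.0
y = level + rng.standard_normal(200)
filtered = kalman_local_level(y, q=0.01, r=1.0)
```

After a short burn-in, the filtered trend tracks the hidden level much more closely than the raw observations do.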

  11. Support for the Logical Execution Time Model on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Kluge, Florian; Schoeberl, Martin; Ungerer, Theo

    2016-01-01

The logical execution time (LET) model increases the compositionality of real-time task sets: removal or addition of tasks does not influence the communication behavior of other tasks. In this work, we extend a multicore operating system running on a time-predictable multicore processor to support the LET model. For communication between tasks we use message passing on a time-predictable network-on-chip to avoid the bottleneck of shared memory. We report our experiences and present results on the costs in terms of memory and execution time.

  12. SEASONAL AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODEL FOR PRECIPITATION TIME SERIES

    OpenAIRE

    Yan Wang; Meng Gao; Xinghua Chang; Xiyong Hou

    2012-01-01

Predicting the trend of precipitation is a difficult task in meteorology and environmental sciences. Statistical approaches from time series analysis provide an alternative way to predict precipitation. An ARIMA model incorporating seasonal characteristics, referred to as a seasonal ARIMA (SARIMA) model, is presented. The time series data are the monthly precipitation records for Yantai, China, from 1961 to 2011. The model is denoted SARIMA (1, 0, 1) (0, 1, 1)12 in this stu...
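The seasonal part of such a fit rests on differencing at the seasonal lag. Here is a numpy-only sketch of that idea on synthetic monthly data (illustrative only; a real SARIMA fit would use a dedicated estimator such as statsmodels' SARIMAX, and the synthetic series is not the Yantai data):

```python
import numpy as np

def seasonal_difference(x, lag=12):
    """Remove a repeating annual pattern: d_t = x_t - x_{t-lag}."""
    return x[lag:] - x[:-lag]

def fit_ar1(d):
    """Least-squares AR(1) coefficient for the differenced series."""
    return float(np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1]))

# synthetic monthly "precipitation" with a seasonal cycle plus noise
rng = np.random.default_rng(2)
months = np.arange(600)
series = 80 + 40 * np.sin(2 * np.pi * months / 12) + 5 * rng.standard_normal(600)

d = seasonal_difference(series)          # the sinusoidal seasonal part cancels exactly
phi = fit_ar1(d)
forecast = series[-12] + phi * d[-1]     # one-step-ahead forecast, hypothetical form
```

The point of the sketch is the decomposition: seasonal differencing removes the annual cycle, and a low-order AR model then handles what remains.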

  13. Networks maximizing the consensus time of voter models

    Science.gov (United States)

    Iwamasa, Yuni; Masuda, Naoki

    2014-07-01

We explore the networks that yield the largest mean consensus time of voter models under different update rules. By analytical and numerical means, we show that the so-called lollipop graph, barbell graph, and double-star graph maximize the mean consensus time under the update rules called link dynamics, voter model, and invasion process, respectively. For each update rule, the largest mean consensus time scales as O(N^3), where N is the number of nodes in the network.
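The mean consensus time itself is easy to estimate by simulation. The sketch below runs the voter-model update rule on a complete graph, where only the count of nodes holding opinion 1 matters; the extremal graphs studied in the paper (lollipop, barbell, double star) would need an explicit adjacency structure:

```python
import random

def voter_consensus_time(n, rng):
    """Voter model on a complete graph of n nodes, run until consensus.
    Each step: a random node copies the opinion of a random neighbour.
    Returns the number of update steps taken."""
    ones = n // 2                     # start from a half/half split
    steps = 0
    while 0 < ones < n:
        i_is_one = rng.random() < ones / n
        # neighbour drawn uniformly from the remaining n-1 nodes
        if i_is_one:
            j_is_one = rng.random() < (ones - 1) / (n - 1)
        else:
            j_is_one = rng.random() < ones / (n - 1)
        if i_is_one and not j_is_one:
            ones -= 1
        elif (not i_is_one) and j_is_one:
            ones += 1
        steps += 1
    return steps

rng = random.Random(3)
times = [voter_consensus_time(50, rng) for _ in range(20)]
mean_time = sum(times) / len(times)
```

Repeating this over graph families and sizes is how the O(N^3) scaling reported above would be probed numerically.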

  14. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

Operational activities in the ocean, such as planning structural repairs or fishing expeditions, require real-time prediction of waves over typical durations of, say, a few hours. Such predictions can be made using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input takes the form of preceding wind speed and wind direction observations. This is useful at stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of artificial neural networks (ANN), genetic programming (GP) and model trees (MT) to carry out time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The waves predicted by the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
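As a stand-in for the ANN/GP/MT models used in the paper, the following sketch regresses a synthetic wave series on lagged wind speeds with ordinary least squares; the synthetic data, the lag count and the linear form are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def lagged_design(wind, n_lags):
    """Design matrix whose row for time t holds wind speeds at t-1 .. t-n_lags."""
    rows = [wind[t - n_lags:t][::-1] for t in range(n_lags, len(wind))]
    return np.asarray(rows)

rng = np.random.default_rng(4)
wind = 8 + 2 * np.sin(np.arange(400) / 20) + rng.standard_normal(400)
# synthetic "wave height" responding to recent wind (illustrative relation only)
wave = 0.2 * wind + 0.1 * np.roll(wind, 1) + 0.1 * rng.standard_normal(400)

n_lags = 6
X = lagged_design(wind, n_lags)
y = wave[n_lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for ANN/GP/MT
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The same lagged-input design feeds the nonlinear learners in the paper; only the mapping from the lag vector to the wave estimate changes.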

  15. Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles

    Science.gov (United States)

    2013-06-01

... retrieval, and motor actions. While formal cognitive models can produce accurate predictions of the time that an expert user would take to interact ...

  16. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana

    2013-05-19

We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in every parameter regime. We also provide estimates of its fractal dimension, as well as numerical simulations to visualise the spatiotemporal chaos.
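A generic three-species chain with a modified Leslie-Gower top predator (the r^2/(v + d2) term) can be integrated as below. The equations and parameter values are illustrative of this model class, not the exact system or blowup regime analyzed in the paper:

```python
import numpy as np

def food_chain_rhs(state, a1=1.0, a2=1.0, b1=3.0, b2=2.0,
                   d1=1.0, d2=10.0, c=0.05, w=0.4):
    """Generic three-species chain: prey u, middle predator v,
    generalist top predator r with a modified Leslie-Gower term."""
    u, v, r = state
    du = u * (1.0 - u) - b1 * u * v / (u + d1)
    dv = -a1 * v + b2 * u * v / (u + d1) - w * v * r / (v + d2)
    dr = c * r * r - a2 * r * r / (v + d2)   # growth vs. Leslie-Gower loss
    return np.array([du, dv, dr])

def integrate(state, dt=0.01, steps=5000):
    for _ in range(steps):
        state = state + dt * food_chain_rhs(state)   # forward Euler sketch
        state = np.maximum(state, 0.0)               # populations stay non-negative
    return state

final = integrate(np.array([0.5, 0.3, 0.2]))
```

With these parameters the r^2 growth term is dominated by the Leslie-Gower loss, so trajectories stay bounded; blowup, as the paper shows, occurs when that balance tips the other way.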

  17. Vector bilinear autoregressive time series model and its superiority ...

    African Journals Online (AJOL)

    In this research, a vector bilinear autoregressive time series model was proposed and used to model three revenue series (X1, X2, X3) . The “orders” of the three series were identified on the basis of the distribution of autocorrelation and partial autocorrelation functions and were used to construct the vector bilinear models.

  18. A model with nonzero rise time for AE signals

    Indian Academy of Sciences (India)

    Acoustic emission (AE) signals are conventionally modelled as damped or decaying sinusoidal functions. A major drawback of this model is its negligible or zero rise time. This paper proposes an alternative model, which provides for the rising part of the signal without sacrificing the analytical tractability and simplicity of the ...

  19. Global travel time tomography with 3-D reference models

    NARCIS (Netherlands)

    Amaru, M.L.

    2007-01-01

    In this study, a global high-resolution P-wave velocity model is obtained for the Earth's crust and mantle using travel time tomography. Improvements to previous models are achieved by incorporating additional data and advancing the method to use 3-D reference models. The newly compiled data set

  20. Bone invading NSCLC cells produce IL-7: mice model and human histologic data

    International Nuclear Information System (INIS)

    Roato, Ilaria; Mussa, Antonio; Ferracini, Riccardo; Caldo, Davide; Godio, Laura; D'Amico, Lucia; Giannoni, Paolo; Morello, Emanuela; Quarto, Rodolfo; Molfetta, Luigi; Buracco, Paolo

    2010-01-01

Bone metastases are a common and dismal consequence of lung cancer, which is a leading cause of death. The role of IL-7 in promoting bone metastases has been previously investigated in NSCLC, but many aspects remain to be clarified. To further study IL-7 function in bone metastasis, we developed a human-in-mice model of bone aggression by NSCLC and analyzed human bone metastasis biopsies. We used NOD/SCID mice implanted with human bone. After bone engraftment, two groups of mice were injected subcutaneously with A549, a human NSCLC cell line, either close to or at the flank contralateral to the human bone implant, while a third control group did not receive cancer cells. Tumor and bone vitality and IL-7 expression were assessed in implanted bone, affected or not by A549. Serum IL-7 levels were evaluated by ELISA. IL-7 immunohistochemistry was performed on 10 human bone NSCLC metastasis biopsies for comparison. At 12 weeks after bone implant, we observed osteogenic activity and neovascularization, confirming bone vitality. Aggressive tumor cells implanted close to human bone invaded the bone tissue. The bone-aggressive cancer cells were positive for IL-7 staining both in the mouse model and in human biopsies. Higher IL-7 serum levels were found in mice injected with A549 cells close to the bone implant than in mice injected with A549 cells in the flank opposite the bone implant. We demonstrated that bone-invading cells express and produce IL-7, which is known to promote osteoclast activation and osteolytic lesions. Tumor-bone interaction increases IL-7 production, with an increase in IL-7 serum levels. The presented mouse model of bone invasion by a contiguous tumor is suitable for studying bone-tumor cell interaction. IL-7 plays a role in the first steps of the metastatic process.

  1. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; van der Heijden, M.C.

    1997-01-01

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well

  2. A Data Model for Determining Weather's Impact on Travel Time

    DEFF Research Database (Denmark)

    Andersen, Ove; Torp, Kristian

    2016-01-01

Accurately estimating travel times in road networks is a complex task because travel times depend on factors such as the weather. In this paper, we present a generic model for integrating weather data with GPS data to improve the accuracy of the estimated travel times. First, we present a data mod...

  3. Combined Forecasts from Linear and Nonlinear Time Series Models

    NARCIS (Netherlands)

    N. Terui (Nobuhiko); H.K. van Dijk (Herman)

    1999-01-01

Combined forecasts from a linear and a nonlinear model are investigated for time series with possibly nonlinear characteristics. The forecasts are combined by a constant coefficient regression method as well as a time varying method. The time varying method allows for a locally

  5. Space-time with a fluctuating metric tensor model

    International Nuclear Information System (INIS)

    Morozov, A N

    2016-01-01

The physical time model presented is based on the assumption that time is a random Poisson process whose intensity depends on natural irreversible processes. The introduction of metric tensor fluctuations of space-time is shown to allow describing the impact of a stochastic gravitational background. The use of spectral line broadening measurements for the registration of relic gravitational waves is suggested. (paper)

  6. Investigation of radiopharmaceuticals from cyclotron produced radionuclides and development of mathematical models. Part of a coordinated programme on production of radiopharmaceuticals from accelerator-produced isotopes

    International Nuclear Information System (INIS)

    Slaus, I.

    1983-04-01

Several radioisotopes for diagnostic use in nuclear medicine are produced using the internal 15 MeV deuteron (30 MeV alpha) beam of the "Ruder Boskovic" Institute in Zagreb, Yugoslavia. Some of the most important radioisotopes produced during the last few years are: gallium-67 (d,xn reaction on a Cu/Ni/Zn target) with a yield of 7.6 MBq/uAh, the 81Rb-81mKr generator (alpha,2n reaction on a Cu/Cu2Br2 target) with a yield of 99 MBq/uAh, iodine-123 (alpha,2n reaction on a Cu/Ag/Sb target) with a yield of 6.3 MBq/uAh, and indium-111 (alpha,2n reaction on a Cu/Cu/Ag target) with a yield of 7.2 MBq/uAh. In addition, a simple mathematical lung model for regional ventilation measurements was developed and used for ventilation studies on normal subjects and subjects with various lung diseases. Based on these studies, a more sophisticated, quantitative lung ventilation model for radioactive tracer tidal breathing was developed. In this new model the periodicity of breathing is fully taken into account, which makes it possible to determine lung ventilation and volume parameters. The model was experimentally verified on healthy subjects, and the value of the effective specific ventilation obtained agrees with comparable parameters in the literature. 81mKr from a generator was used to perform these experimental studies

  7. Recovering Old Stereoscopic Negatives and Producing Digital 3d Models of Former Appearances of Historic Buildings

    Science.gov (United States)

    Rodríguez Miranda, Á.; Valle Melón, J. M.

    2017-02-01

Three-dimensional models with photographic textures have become a common product for the study and dissemination of elements of heritage. Interest in cultural heritage also includes its evolution over time; therefore, apart from 3D models of the current state, it is interesting to be able to generate models representing how elements were in the past. To that end, it is necessary to resort to archive information corresponding to the moments we want to visualize. This text analyses the possibilities of generating 3D surface models with photographic textures from old collections of analog negatives coming from terrestrial stereoscopic photogrammetry of historic buildings. The case studies presented refer to the geometric documentation of a small hermitage (done in 1996) and two sections of a wall (year 2000). The procedure starts with the digitization of the film negatives and the processing of the generated images, after which a combination of different methods for 3D reconstruction and texture wrapping is applied: techniques working simultaneously with several images (such as the algorithms of Structure from Motion, SfM) and single-image techniques (such as reconstruction based on vanishing points). The features of the obtained models are then described in terms of geometric accuracy, completeness and aesthetic quality. In this way, it is possible to establish the real applicability of the models for the aforementioned historical studies and dissemination purposes. The text also draws attention to the importance of preserving the documentary heritage available in collections of negatives in archival custody, and to the increasing difficulty of using them due to: (1) problems of access and physical conservation, (2) obsolescence of the equipment for scanning and stereoplotting, and (3) the fact that the software for processing digitized photographs is discontinued.

  8. MCDIRC: A model to estimate creep produced by microcracking around a shaft in intact rock

    International Nuclear Information System (INIS)

    Wilkins, B.J.S.; Rigby, G.L.

    1989-12-01

    Atomic Energy of Canada Limited (AECL) is studying the concept of disposing of nuclear fuel waste in a vault in plutonic rock. Models are being developed to predict the mechanical behaviour of the rock in response to excavation and heat from the waste. The dominant mechanism of deformation at temperatures below 150 degrees C is microcracking, which results in rock creep and a decrease in rock strength. A model has been constructed to consider the perturbation of the stress state of intact rock by a vertical cylindrical opening. Slow crack-growth data are used to estimate time-dependent changes in rock strength, from which the movement (creep) of the opening wall and radial strain in the rock mass can be estimated

  9. Bayesian dynamic modeling of time series of dengue disease case counts.

    Science.gov (United States)

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful

  10. Bayesian dynamic modeling of time series of dengue disease case counts.

    Directory of Open Access Journals (Sweden)

    Daniel Adyro Martínez-Bello

    2017-07-01

The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease
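The generative idea behind these dynamic Poisson models, a log-rate driven by a first-order random-walk coefficient on a meteorological covariate, can be sketched as a forward simulation (the paper's MCMC inference is omitted here; all names and values are illustrative):

```python
import numpy as np

def simulate_dynamic_poisson(n_weeks, beta0, temp, sigma, rng):
    """Poisson counts whose log-rate has a random-walk coefficient on
    a meteorological covariate: log lam_t = beta0 + b_t * temp_t,
    with b_t = b_{t-1} + N(0, sigma^2)."""
    b = 0.0
    counts = np.empty(n_weeks, dtype=int)
    for t in range(n_weeks):
        b += sigma * rng.standard_normal()          # first-order random walk
        lam = np.exp(beta0 + b * temp[t])           # log-link rate
        counts[t] = rng.poisson(lam)
    return counts

rng = np.random.default_rng(5)
temp = rng.normal(0.0, 1.0, 200)                    # standardized weekly temperature
counts = simulate_dynamic_poisson(200, beta0=2.0, temp=temp, sigma=0.02, rng=rng)
```

Fitting such a model means inverting this simulation: inferring beta0, the path of b_t and sigma from the observed counts.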

  11. Modelling and Formal Verification of Timing Aspects in Large PLC Programs

    CERN Document Server

    Fernandez Adiego, B; Blanco Vinuela, E; Tournier, J-C; Gonzalez Suarez, V M; Blech, J O

    2014-01-01

One of the main obstacles that prevent model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper addresses this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow users to balance the trade-off between the complexity of the model, i.e. its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which produces models for different model checkers directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.

  12. A Framework for Relating Timed Transition Systems and Preserving TCTL Model Checking

    DEFF Research Database (Denmark)

    Jacobsen, Lasse; Jacobsen, Morten; Møller, Mikael Harkjær

    2010-01-01

Many formal translations between time-dependent models have been proposed over the years. While some of them produce timed-bisimilar models, others preserve only reachability or (weak) trace equivalence. We suggest a general framework for arguing when a translation preserves Timed Computation Tree Logic (TCTL) or its safety fragment. The framework works at the level of timed transition systems, making it independent of the modeling formalisms and applicable to many of the translations published in the literature. Finally, we present a novel translation from extended Timed-Arc Petri Nets to Networks of Timed Automata and, using the framework, argue that it preserves full TCTL. The translation has been implemented in the verification tool TAPAAL.

  13. 'Talking about my experiences … at times disturbing yet positive': Producing narratives with people living with dementia.

    Science.gov (United States)

    Benbow, Susan M; Kingston, Paul

    2016-09-01

    This research investigated narrative production and use with families living with dementia. We hypothesised that the process of narrative production would be beneficial to people with dementia and carers, and would elicit important learning for health and social care professionals. Through third sector partners, we recruited community-dwelling people with dementia and carers who consented to develop written, audiotaped or videotaped narratives. Audio-taped narratives were transcribed verbatim and handwritten narratives word-processed. After checking by participants, completed narratives were analysed thematically using qualitative data analysis computer software. A summary of the analysis was circulated to participants, inviting feedback: the analysis was then reviewed. A feedback questionnaire was subsequently circulated to participants, and responses were analysed thematically. Twenty-one carers and 20 people with dementia participated in the project. Four themes of support were identified: 'relationships', 'services', 'prior experience of coping' and having an 'explanation for the dementia'. Three themes were identified as possible additional stresses: 'emotions', 'physical health' and 'identity'. We suggest a model incorporating all these themes, which appeared to contribute to three further themes; 'experience of dementia', 'approaches to coping' and 'looking to the future'. In participant feedback, the main themes identified were 'emotions', 'putting things in perspective', 'sharing or not sharing the narrative' and 'actions resulting'. Producing a narrative is a valuable and engaging experience for people with dementia and carers, and is likely to contribute to the quality of dementia care. Further research is needed to establish how narrative production could be incorporated into routine practice. © The Author(s) 2014.

  14. Models for Pooled Time-Series Cross-Section Data

    Directory of Open Access Journals (Sweden)

    Lawrence E Raffalovich

    2015-07-01

Several models are available for the analysis of pooled time-series cross-section (TSCS) data, defined as "repeated observations on fixed units" (Beck and Katz 1995). In this paper, we run the following models: (1) a completely pooled model, (2) fixed effects models, and (3) multi-level/hierarchical linear models. To illustrate these models, we use a Generalized Least Squares (GLS) estimator with cross-section weights and panel-corrected standard errors (with EViews 8) on the cross-national homicide trends data of forty countries from 1950 to 2005, which we source from published research (Messner et al. 2011). We describe and discuss the similarities and differences between the models, and what information each can contribute to help answer substantive research questions. We conclude with a discussion of how the models we present may help to mitigate validity threats inherent in pooled time-series cross-section data analysis.
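The contrast between the completely pooled model and the fixed effects (within) estimator can be shown in a few lines of numpy; the GLS weighting and panel-corrected standard errors used in the paper are omitted, and the synthetic panel is an illustrative assumption:

```python
import numpy as np

def pooled_ols(x, y):
    """Completely pooled slope: one regression over all unit-periods."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

def fixed_effects(x, y, unit):
    """Within estimator: demean x and y inside each unit, then regress."""
    xd, yd = x.astype(float).copy(), y.astype(float).copy()
    for u in np.unique(unit):
        m = unit == u
        xd[m] -= xd[m].mean()
        yd[m] -= yd[m].mean()
    return float(np.dot(xd, yd) / np.dot(xd, xd))

# synthetic panel: 5 units, 30 periods, unit intercepts correlated with x
rng = np.random.default_rng(6)
unit = np.repeat(np.arange(5), 30)
alpha = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # unit effects
x = alpha[unit] + rng.standard_normal(150)        # regressor correlated with effects
y = alpha[unit] + 1.0 * x + 0.5 * rng.standard_normal(150)

fe = fixed_effects(x, y, unit)
print(abs(fe - 1.0) < 0.2, pooled_ols(x, y) > fe)
```

Because the unit effects are correlated with x, the pooled slope is biased upward while the within estimator recovers the true slope; this is exactly the kind of validity threat the paper discusses.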

  15. New analytic results for speciation times in neutral models.

    Science.gov (United States)

    Gernhard, Tanja

    2008-05-01

In this paper, we investigate the standard Yule model and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way (as opposed to the common simulation approach) for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if, for the reconstructed phylogenetic trees, no time scale is available. A missing time scale could be due to supertree methods, morphological data, or molecular data that violate the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
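The simulation approach the authors contrast with is simple in the pure-birth Yule case: with k lineages, the waiting time to the next speciation is exponential with rate k * lambda, so the event times are easy to sample. (Conditioning on n extant species under the birth-death model, which the paper notes is delicate, is not attempted in this sketch.)

```python
import random

def yule_speciation_times(n, lam, rng):
    """Pure-birth (Yule) tree grown until n extant species.
    With k lineages the wait to the next speciation is Exp(k * lam);
    returns the absolute times of the n-1 speciation events."""
    t, times = 0.0, []
    for k in range(1, n):
        t += rng.expovariate(k * lam)
        times.append(t)
    return times

rng = random.Random(7)
samples = [yule_speciation_times(10, 1.0, rng) for _ in range(2000)]

# expected age of the full tree is the harmonic number sum_{k=1}^{9} 1/k
mean_age = sum(s[-1] for s in samples) / len(samples)
expected = sum(1.0 / k for k in range(1, 10))
```

The simulated mean tree age converges to the harmonic-number expression, the kind of quantity the paper derives in closed form instead.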

  16. Time representation in reinforcement learning models of the basal ganglia

    Directory of Open Access Journals (Sweden)

    Samuel Joseph Gershman

    2014-01-01

Reinforcement learning models have been influential in understanding many aspects of basal ganglia function, from reward prediction to action selection. Time plays an important role in these models, but there is still no theoretical consensus about what kind of time representation is used by the basal ganglia. We review several theoretical accounts and their supporting evidence. We then discuss the relationship between reinforcement learning models and the timing mechanisms that have been attributed to the basal ganglia. We hypothesize that a single computational system may underlie both reinforcement learning and interval timing, the perception of duration in the range of seconds to hours. This hypothesis, which extends earlier models by incorporating a time-sensitive action selection mechanism, may have important implications for understanding disorders like Parkinson's disease in which both decision making and timing are impaired.

  17. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  18. Volt-time characteristics of small airgaps with Hyperbolic model

    Energy Technology Data Exchange (ETDEWEB)

    Venkatesan, S. [Cardiff University, School of Engineering, Queens Building, The Parade, Cardiff, Wales CF24 3AA (United Kingdom); Usa, S. [Cardiff University, School of Engineering, Queens Building, The Parade, Cardiff, Wales CF24 3AA (United Kingdom); Division of High Voltage Engineering, Anna University, Chennai - 600025 (India)

    2010-07-15

    An experimental investigation of the volt-time characteristics of small airgaps is performed. A Hyperbolic model is proposed to account for the results. The constants of the model have a direct bearing on parameters of the Disruptive Effect model for breakdown with non-standard Lightning Impulses (LIs). Analyses with uniform and non-uniform electrodes show their effect on the Hyperbolic model parameters. (author)
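The entry above does not give the functional form, but one common hyperbolic volt-time relation is V(t) = V0 + K/(t - t0), where V0 approaches the static breakdown voltage at long times. A minimal sketch with illustrative constants (not the paper's fitted values):

```python
import numpy as np

# Hyperbolic volt-time characteristic: V(t) = V0 + K / (t - t0).
# Constants below are illustrative assumptions, not fitted values.
V0, K, t0 = 30.0, 40.0, 0.2              # kV, kV*us, us

def breakdown_voltage(t_us):
    """Breakdown voltage (kV) for a time-to-breakdown of t_us microseconds."""
    t_us = np.asarray(t_us, dtype=float)
    return V0 + K / (t_us - t0)

times = np.array([1.0, 2.0, 5.0, 20.0])  # microseconds
volts = breakdown_voltage(times)
# The curve decreases monotonically toward V0 at long times.
assert np.all(np.diff(volts) < 0)
```

The fitted constants would vary with gap geometry, which is the effect the uniform/non-uniform electrode analyses probe.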

  19. Numerical modeling for saturated-zone groundwater travel time analysis at Yucca Mountain

    International Nuclear Information System (INIS)

    Arnold, B.W.; Barr, G.E.

    1996-01-01

    A three-dimensional, site-scale numerical model of groundwater flow in the saturated zone at Yucca Mountain was constructed and linked to particle tracking simulations to produce an estimate of the distribution of groundwater travel times from the potential repository to the boundary of the accessible environment. This effort and associated modeling of groundwater travel times in the unsaturated zone were undertaken to aid in the evaluation of compliance of the site with 10CFR960. These regulations stipulate that pre-waste-emplacement groundwater travel time to the accessible environment shall exceed 1,000 years along any path of likely and significant radionuclide travel

  20. Time-symmetric universe model and its observational implication

    Energy Technology Data Exchange (ETDEWEB)

    Futamase, T.; Matsuda, T.

    1987-08-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. We consider the observational consequences of such advanced waves, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase.

  1. A time-symmetric Universe model and its observational implication

    International Nuclear Information System (INIS)

    Futamase, T.; Matsuda, T.

    1987-01-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. The observational consequences of such advanced waves are considered, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase

  2. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  3. Model Checking Timed Automata with Priorities using DBM Subtraction

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Pettersson, Paul

    2006-01-01

In this paper we describe an extension of timed automata with priorities, and efficient algorithms to compute subtraction on DBMs (difference bounded matrices), needed in symbolic model-checking of timed automata with priorities. The subtraction is one of the few operations on DBMs that result in a non-convex zone, and thus in several DBMs; this number affects the performance of symbolic model-checking. The uses of the DBM subtraction operation extend beyond timed automata with priorities. It is also useful for allowing guards on transitions with urgent actions, deadlock checking, and timed games.

  4. Continuous time modelling with individually varying time intervals for oscillating and non-oscillating processes.

    Science.gov (United States)

    Voelkle, Manuel C; Oud, Johan H L

    2013-02-01

    When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.

  5. Model-Checking Real-Time Control Programs

    DEFF Research Database (Denmark)

    Iversen, T. K.; Kristoffersen, K. J.; Larsen, Kim Guldstrand

    2000-01-01

In this paper, we present a method for automatic verification of real-time control programs running on LEGO(R) RCX(TM) bricks using the verification tool UPPAAL. The control programs, consisting of a number of tasks running concurrently, are automatically translated into the mixed automata model of UPPAAL. The fixed scheduling algorithm used by the LEGO(R) RCX(TM) processor is modeled in UPPAAL, and supplying similar (sufficient) timed automata models for the environment allows analysis of the overall real-time system using the tools of UPPAAL. The technique is illustrated on a program for sorting LEGO(R) bricks.

  6. A Time-Frequency Auditory Model Using Wavelet Packets

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1996-01-01

A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns it is shown that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system.

  7. A composite model of the space-time and 'colors'

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1987-03-01

A pregeometric and pregauge model of the space-time and "colors", in which the space-time metric and "color" gauge fields are both composite, is presented. By the non-triviality of the model, the number of space-time dimensions is restricted to be not larger than the number of "colors". The long conjectured space-color correspondence is realized in the model action of the Nambu-Goto type, which is invariant under both general-coordinate and local-gauge transformations. (author)

  8. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.

  9. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
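The identification step, fitting a finite-dimensional linear operator directly from snapshot pairs, can be sketched in the spirit of dynamic mode decomposition; the toy system below is an assumption for illustration, not one of the paper's applications:

```python
import numpy as np

# Toy linear system: a damped rotation. We recover the operator from
# snapshot pairs by least squares -- the basic identification step in
# data-driven Koopman/DMD-style modeling.
rng = np.random.default_rng(0)
theta = 0.3
A_true = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])

x = rng.standard_normal(2)
snapshots = [x]
for _ in range(100):
    x = A_true @ x
    snapshots.append(x)
X = np.array(snapshots[:-1]).T   # states at times 0..T-1
Y = np.array(snapshots[1:]).T    # states at times 1..T

A_est = Y @ np.linalg.pinv(X)    # least-squares fit: Y ~ A_est X

# Koopman spectral properties of the identified operator: a damped
# oscillatory eigenvalue pair with magnitude equal to the decay factor.
eigvals = np.linalg.eigvals(A_est)
assert np.allclose(np.abs(eigvals), 0.95, atol=1e-6)
```

The eigenvalues are the kind of spectral invariants the framework uses for model comparison, clustering, and feature generation.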

  10. Dynamic properties of the Solow model with bounded technological progress and time-to-build technology.

    Science.gov (United States)

    Guerrini, Luca; Sodini, Mauro

    2014-01-01

We introduce a time-to-build technology in a Solow model with bounded technological progress. Our analysis shows that the system may be asymptotically stable, or it can produce stability switches and Hopf bifurcations when the time delay varies. The direction and the stability criteria of the bifurcating periodic solutions are obtained by the normal form theory and the center manifold theorem. Numerical simulations confirm the theoretical results.

  11. Fluid pressure arrival time tomography: Estimation and assessment in the presence of inequality constraints, with an application to a producing gas field at Krechba, Algeria

    Energy Technology Data Exchange (ETDEWEB)

    Rucci, A.; Vasco, D.W.; Novali, F.

    2010-04-01

Deformation in the overburden proves useful in deducing spatial and temporal changes in the volume of a producing reservoir. Based upon these changes we estimate diffusive travel times associated with the transient flow due to production, and then, as the solution of a linear inverse problem, the effective permeability of the reservoir. An advantage of an approach based upon travel times, as opposed to one based upon the amplitude of surface deformation, is that it is much less sensitive to the exact geomechanical properties of the reservoir and overburden. Inequalities constrain the inversion, under the assumption that the fluid production only results in pore volume decreases within the reservoir. We apply the formulation to satellite-based estimates of deformation in the material overlying a thin gas production zone at the Krechba field in Algeria. The peak displacement after three years of gas production is approximately 0.5 cm, overlying the eastern margin of the anticlinal structure defining the gas field. Using data from 15 irregularly-spaced images of range change, we calculate the diffusive travel times associated with the startup of a gas production well. The inequality constraints are incorporated into the estimates of model parameter resolution and covariance, improving the resolution by roughly 30 to 40%.

  12. A martingale analysis of first passage times of time-dependent Wiener diffusion models.

    Science.gov (United States)

    Srivastava, Vaibhav; Feng, Samuel F; Cohen, Jonathan D; Leonard, Naomi Ehrich; Shenhav, Amitai

    2017-04-01

    Research in psychology and neuroscience has successfully modeled decision making as a process of noisy evidence accumulation to a decision bound. While there are several variants and implementations of this idea, the majority of these models make use of a noisy accumulation between two absorbing boundaries. A common assumption of these models is that decision parameters, e.g., the rate of accumulation (drift rate), remain fixed over the course of a decision, allowing the derivation of analytic formulas for the probabilities of hitting the upper or lower decision threshold, and the mean decision time. There is reason to believe, however, that many types of behavior would be better described by a model in which the parameters were allowed to vary over the course of the decision process. In this paper, we use martingale theory to derive formulas for the mean decision time, hitting probabilities, and first passage time (FPT) densities of a Wiener process with time-varying drift between two time-varying absorbing boundaries. This model was first studied by Ratcliff (1980) in the two-stage form, and here we consider the same model for an arbitrary number of stages (i.e. intervals of time during which parameters are constant). Our calculations enable direct computation of mean decision times and hitting probabilities for the associated multistage process. We also provide a review of how martingale theory may be used to analyze similar models employing Wiener processes by re-deriving some classical results. In concert with a variety of numerical tools already available, the current derivations should encourage mathematical analysis of more complex models of decision making with time-varying evidence.
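The multistage process described above can also be approximated by direct Monte Carlo simulation, a useful cross-check for the analytic martingale formulas. A sketch with hypothetical stage parameters, an Euler-Maruyama discretization, and constant boundaries for simplicity (the paper also treats time-varying boundaries):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_fpt(stages, a=1.0, dt=0.005, t_max=5.0):
    """One first-passage time of a Wiener process with piecewise-constant
    (multistage) drift between absorbing boundaries at +a and -a.
    `stages` is a list of (stage_end_time, drift) pairs -- hypothetical
    illustrative values."""
    x, t = 0.0, 0.0
    while t < t_max:
        mu = next(m for (end, m) in stages if t < end)  # current stage drift
        x += mu * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= a:
            return t, +1
        if x <= -a:
            return t, -1
    return t_max, 0  # not absorbed within t_max

# Two-stage example (cf. Ratcliff, 1980): weak drift early, strong drift later.
stages = [(0.5, 0.2), (np.inf, 1.5)]
results = [simulate_fpt(stages) for _ in range(200)]
upper_frac = np.mean([c == +1 for (_, c) in results])
mean_dt = np.mean([t for (t, _) in results])
assert 0.5 < upper_frac <= 1.0   # positive drift favors the upper boundary
```

Estimates like `upper_frac` and `mean_dt` are exactly the hitting probabilities and mean decision times that the martingale derivations give in closed form.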

  13. Space-Time Aquatic Resources Modeling and Analysis Program (STARMAP)

    Data.gov (United States)

    Federal Laboratory Consortium — Colorado State University has received funding from the U.S. Environmental Protection Agency (EPA) for its Space-Time Aquatic Resources Modeling and Analysis Program...

  14. Periodic Solutions for a Delayed Population Model on Time Scales

    OpenAIRE

    Kejun Zhuang; Zhaohui Wen

    2010-01-01

    This paper deals with a delayed single population model on time scales. With the assistance of coincidence degree theory, sufficient conditions for existence of periodic solutions are obtained. Furthermore, the better estimations for bounds of periodic solutions are established.

  15. Linear Time Invariant Models for Integrated Flight and Rotor Control

    Science.gov (United States)

    Olcer, Fahri Ersel

    2011-12-01

    Recent developments on individual blade control (IBC) and physics based reduced order models of various on-blade control (OBC) actuation concepts are opening up opportunities to explore innovative rotor control strategies for improved rotor aerodynamic performance, reduced vibration and BVI noise, and improved rotor stability, etc. Further, recent developments in computationally efficient algorithms for the extraction of Linear Time Invariant (LTI) models are providing a convenient framework for exploring integrated flight and rotor control, while accounting for the important couplings that exist between body and low frequency rotor response and high frequency rotor response. Formulation of linear time invariant (LTI) models of a nonlinear system about a periodic equilibrium using the harmonic domain representation of LTI model states has been studied in the literature. This thesis presents an alternative method and a computationally efficient scheme for implementation of the developed method for extraction of linear time invariant (LTI) models from a helicopter nonlinear model in forward flight. The fidelity of the extracted LTI models is evaluated using response comparisons between the extracted LTI models and the nonlinear model in both time and frequency domains. Moreover, the fidelity of stability properties is studied through the eigenvalue and eigenvector comparisons between LTI and LTP models by making use of the Floquet Transition Matrix. For time domain evaluations, individual blade control (IBC) and On-Blade Control (OBC) inputs that have been tried in the literature for vibration and noise control studies are used. For frequency domain evaluations, frequency sweep inputs are used to obtain frequency responses of fixed system hub loads to a single blade IBC input. 
The evaluation results demonstrate the fidelity of the extracted LTI models, and thus establish the validity of the LTI model extraction process for use in integrated flight and rotor control.

  16. Residence time modeling of hot melt extrusion processes.

    Science.gov (United States)

    Reitz, Elena; Podhaisky, Helmut; Ely, David; Thommes, Markus

    2013-11-01

The hot melt extrusion process is a widespread technique to mix viscous melts. The residence time of material in the process frequently determines the product properties. An experimental setup and a corresponding mathematical model were developed to evaluate residence time and residence time distribution in twin screw extrusion processes. The extrusion process was modeled as the convolution of a mass transport process described by a Gaussian probability function, and a mixing process represented by an exponential function. The residence time of the extrusion process was determined by introducing a tracer at the extruder inlet and measuring the tracer concentration at the die. These concentrations were fitted to the residence time model, and an adequate correlation was found. Different parameters were derived to characterize the extrusion process, including the dead time, the apparent mixing volume, and a transport related axial mixing. A 2³ design of experiments was performed to evaluate the effect of powder feed rate, screw speed, and melt viscosity of the material on the residence time. All three parameters affect the residence time of material in the extruder. In conclusion, a residence time model was developed to interpret experimental data and to get insights into the hot melt extrusion process. Copyright © 2013 Elsevier B.V. All rights reserved.
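The convolution structure of the residence time model can be sketched numerically: a Gaussian transport term convolved with an exponential mixing term yields a residence time distribution whose mean is approximately the dead time plus the mixing time constant. All parameter values below are illustrative, not those fitted in the study:

```python
import numpy as np

# Residence time distribution (RTD) as the convolution of a Gaussian
# transport term and an exponential mixing term. Parameters are
# illustrative assumptions, not the study's fitted values.
dt_ = 0.5                                # time step in seconds
t = np.arange(0.0, 600.0, dt_)
t_dead, sigma = 60.0, 10.0               # dead time and transport spread (s)
tau_mix = 40.0                           # exponential mixing time constant (s)

transport = np.exp(-0.5 * ((t - t_dead) / sigma) ** 2)
mixing = np.exp(-t / tau_mix)

rtd = np.convolve(transport, mixing)[: len(t)]
rtd /= rtd.sum() * dt_                   # normalize to unit area

mean_residence_time = (t * rtd).sum() * dt_
# Mean of the convolution ~ mean of transport + mean of mixing terms.
assert abs(mean_residence_time - (t_dead + tau_mix)) < 5.0
```

Fitting this curve to measured tracer concentrations at the die is what separates the dead time from the mixing contribution in the experimental setup described above.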

  17. Real-time advanced nuclear reactor core model

    International Nuclear Information System (INIS)

    Koclas, J.; Friedman, F.; Paquette, C.; Vivier, P.

    1990-01-01

The paper describes a multi-nodal advanced nuclear reactor core model. The model is based on the application of modern equivalence theory to the solution of the neutron diffusion equation in real time, employing the finite differences method. The use of equivalence theory allows the application of the finite differences method to cores divided into hundreds of nodes, as opposed to the much finer divisions (on the order of ten thousand nodes) where the unmodified method is currently applied. As a result the model can be used for modelling of the core kinetics for real-time full scope training simulators. Results of benchmarks validate the basic assumptions of the model and its applicability to real-time simulation. (orig./HP)

  18. Train Dwell Time Models for Rail Passenger Service

    Directory of Open Access Journals (Sweden)

    San Hor Peay

    2016-01-01

Full Text Available In recent years, more studies have been conducted on train dwell time, as it is a key parameter of rail system performance and reliability. This paper presents an overview of train dwell time models for rail passenger service from various continents, namely Asia, North America, Europe and Australia. The factors affecting train dwell time are identified and analysed across several rail network operators. The dwell time models developed by various researchers are also discussed and reviewed. Finally, the contributions from the outcomes of these models are briefly addressed. In conclusion, this paper suggests that there is a need to further study the factors with strong influence upon dwell time to improve the quality of train services.

  19. Time domain modeling of tunable response of graphene

    DEFF Research Database (Denmark)

    Prokopeva, Ludmila; Emani, Naresh K.; Boltasseva, Alexandra

    2013-01-01

We present a causal numerical model for time domain simulations of the optical response of graphene. The dielectric function is approximated with a conductivity term, a Drude term, and a number of critical points terms.
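As an illustration of the Drude contribution to such a dispersion model (the full model in the entry adds a conductivity term and critical-points terms), a sketch with made-up parameters:

```python
import numpy as np

def drude_epsilon(omega, eps_inf=1.0, omega_p=1.0e15, gamma=1.0e13):
    """Drude dielectric function eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).
    Parameter values are illustrative assumptions, not fitted graphene data."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

omega = np.linspace(1e14, 2e15, 5)       # angular frequencies (rad/s)
eps = drude_epsilon(omega)
assert np.all(eps.imag > 0)              # lossy (passive) response
```

A positive imaginary part for all frequencies is one of the causality/passivity properties a time-domain model of this kind must preserve.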

  20. An Examination of Models of Relaxation in Complex Systems. I. Continuous Time Random Walk (CTRW) Models.

    Science.gov (United States)

    1986-02-04

NRL Memorandum Report 5719: An Examination of Models of Relaxation in Complex Systems. I. Continuous Time Random Walk (CTRW) Models. Ngai, K. L.; Rendell, R. W. Models of relaxation in complex systems based on the continuous time random walk (CTRW) formalism are examined.

  1. Modeling transient luminous events produced by cloud to ground lightning and narrow bipolar pulses: detailed spectra and chemical impact

    Science.gov (United States)

    Perez-Invernon, F. J.; Luque, A.; Gordillo-Vazquez, F. J.

    2017-12-01

The electromagnetic field generated by lightning discharges can produce Transient Luminous Events (TLEs) in the lower ionosphere, as previously investigated by many authors. Some recent studies suggest that narrow bipolar pulses (NBP), an impulsive and not well-established type of atmospheric electrical discharge, could also produce TLEs. The characterization and observation of such TLEs could be a source of information about the physics underlying NBP. In this work, we develop two different electrodynamical models to study the impact of lightning-driven electromagnetic fields on the lower ionosphere. The first model calculates the quasi-electrostatic field produced by a single cloud-to-ground lightning discharge in the terrestrial atmosphere and its influence on the electron transport. This scheme allows us to study halos, a relatively frequent type of TLE. The second model solves the Maxwell equations for the electromagnetic field produced by a lightning discharge, coupled with the Langevin equation for the induced currents in the ionosphere. This model is useful to investigate elves, a fast TLE produced by lightning or by NBP. In addition, both models are coupled with a detailed chemistry of the electronically and vibrationally excited states of molecular nitrogen, allowing us to calculate synthetic spectra of both halos and elves. The models also include a detailed set of kinetic reactions to calculate the temporal evolution of other species. Our results suggest an important enhancement of some molecular species produced by halos, such as NOx, N2O and other metastable species. The quantification of their production could be useful to understand the role of thunderstorms in the climate of our planet. In the case of TLEs produced by NBP, our model confirms the appearance of double elves and allows us to compute their spectral characteristics.

  2. Time-varying boundaries for diffusion models of decision making and response time

    NARCIS (Netherlands)

    Zhang, S.; Lee, M.D.; Vandekerckhove, J.; Maris, G.; Wagenmakers, E.-J.

    2014-01-01

    Diffusion models are widely-used and successful accounts of the time course of two-choice decision making. Most diffusion models assume constant boundaries, which are the threshold levels of evidence that must be sampled from a stimulus to reach a decision. We summarize theoretical results from

  3. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.

  4. Continuum-time Hamiltonian for the Baxter's model

    International Nuclear Information System (INIS)

    Libero, V.L.

    1983-01-01

    The associated Hamiltonian for the symmetric eight-vertex model is obtained by taking the time-continuous limit in an equivalent Ashkin-Teller model. The result is a Heisenberg Hamiltonian with coefficients J sub(x), J sub(y) and J sub(z) identical to those found by Sutherland for choices of the parameters a, b, c and d that bring the model close to the transition. The change in the operators is accomplished explicitly, the relation between the crossover operator for the Ashkin-Teller model and the energy operator for the eight-vertex model being obtained in a transparent form. (Author) [pt

  5. Extended Cellular Automata Models of Particles and Space-Time

    Science.gov (United States)

    Beedle, Michael

    2005-04-01

Models of particles and space-time are explored through simulations and theoretical models that use Extended Cellular Automata. The extended Cellular Automata models go beyond simple scalar binary cell-fields to discrete multi-level group representations such as SO(2), SU(2), SU(3), and Spin(3,1). The propagation and evolution of these extended cellular automata are then compared to quantum field theories based on the "harmonic paradigm", i.e. built from an infinite number of harmonic oscillators, and to gravitational models.

  6. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  7. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows gaining longer lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil
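The alternating renewal structure described above can be sketched as follows; exponential storm/dry durations and a constant per-storm intensity are simplifying assumptions for illustration, not the calibrated DRIP distributions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rainfall(total_hours, mean_storm=6.0, mean_dry=30.0,
                      mean_intensity=2.0):
    """Hourly rainfall series from an alternating renewal process: storm
    and dry durations alternate, and each storm gets a random constant
    intensity (mm/h). Exponential durations and rectangular pulses are
    illustrative assumptions, not the calibrated DRIP model."""
    series = np.zeros(total_hours)
    t, raining = 0.0, False
    while t < total_hours:
        dur = rng.exponential(mean_storm if raining else mean_dry)
        if raining:
            series[int(t):int(min(t + dur, total_hours))] = \
                rng.exponential(mean_intensity)
        t += dur
        raining = not raining
    return series

rain = simulate_rainfall(24 * 365)           # one year of hourly data
wet_fraction = np.mean(rain > 0)
# Expected wet fraction ~ mean_storm / (mean_storm + mean_dry) = 1/6
assert 0.05 < wet_fraction < 0.35
```

Calibration would replace the hypothetical parameters with values estimated from the Campania rain-gauge records.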

  8. Snyder-de Sitter model from two-time physics

    International Nuclear Information System (INIS)

    Carrisi, M. C.; Mignemi, S.

    2010-01-01

    We show that the symplectic structure of the Snyder model on a de Sitter background can be derived from two-time physics in seven dimensions and propose a Hamiltonian for a free particle consistent with the symmetries of the model.

  9. Modeling of water treatment plant using timed continuous Petri nets

    Science.gov (United States)

    Nurul Fuady Adhalia, H.; Subiono, Adzkiya, Dieky

    2017-08-01

    Petri nets graphically represent certain conditions and rules. In this paper, we construct a model of a Water Treatment Plant (WTP) using timed continuous Petri nets. Specifically, we assume that (1) the water pump is always active and (2) the water source is always available. After obtaining the model, the flow through the transitions and the token conservation laws are calculated.
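In a timed continuous Petri net, markings are real-valued and transitions fire at a continuous speed proportional to their input markings, so token conservation can be checked along the trajectory. A minimal fluid sketch, assuming a hypothetical two-place fragment (a raw-water place draining into a treated-water place) rather than the paper's actual WTP net:

```python
def simulate_cpn(m1, m2, lam=0.5, dt=0.01, steps=1000):
    """Fluid (timed continuous) Petri net sketch: transition T1
    drains place P1 into place P2 with firing speed lam * m1
    (infinite-server semantics), so total tokens are conserved.
    """
    hist = [(m1, m2)]
    for _ in range(steps):
        flow = lam * m1        # flow through the transition
        m1 -= flow * dt
        m2 += flow * dt
        hist.append((m1, m2))
    return hist

hist = simulate_cpn(10.0, 0.0)
```

The conservation law here is simply m1 + m2 = const, because T1 has one input and one output arc of equal weight.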

  10. A fire management simulation model using stochastic arrival times

    Science.gov (United States)

    Eric L. Smith

    1987-01-01

    Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...

  11. Modeling Root Depth Development with time under some Crop and ...

    African Journals Online (AJOL)

    Five empirical models for the prediction of root depth developed with time under four combinations of crop and tillage management systems have been developed by the method of polynomial regression. Root depth predictions by a general model were severally correlated with root depth predictions by the ...

  12. An EOQ Model for Delayed Deteriorating Items with Linear Time ...

    African Journals Online (AJOL)

    An EOQ model for delayed deteriorating items with linear time dependent holding cost is considered in this paper. This is a little deviation from most inventory models that consider the holding cost to be constant. In this paper, permissible delay in payment is not considered rather the payment is made immediately the ...

  13. Timed Automaton Models for Simple Programmable Logic Controllers

    NARCIS (Netherlands)

    Mader, Angelika H.; Wupper, Hanno

    We give timed automaton models for a class of Programmable Logic Controller (PLC) applications, that are programmed in a simple fragment of the language Instruction Lists as defined in the standard IEC 1131-3. Two different approaches for modelling timers are suggested, that lead to two different

  14. Producer-decomposer matching in a simple model ecosystem: A network coevolutionary approach to ecosystem organization

    International Nuclear Information System (INIS)

    Higashi, Masahiko; Yamamura, Norio; Nakajima, Hisao; Abe, Takuya

    1993-01-01

    The present note is concerned with how the ecosystem maintains its energy and matter processes, and how those processes change through ecological and geological time; or, how the constituent biota of an ecosystem maintain their life, and how ecological (species) succession and biological evolution proceed within an ecosystem. To advance further Tansky's (1976) approach to ecosystem organization, which investigated the characteristic properties of the developmental process of a model ecosystem by applying Margalef's (1968) maximum maturity principle to derive its long-term change, we seek a course for deriving the macroscopic trends along the organization process of an ecosystem as a consequence of the interactions among its biotic components and their modification of ecological traits. Using a simple ecosystem model consisting of four aggregated components ("compartments") connected by nutrient flows, we investigate how a change in the value of a parameter alters the network pattern of flows and stocks, even causing a change in the value of another parameter, which in turn brings about further change in the network pattern and in the values of some (possibly the original) parameters. The continuation of this chain reaction involving feedbacks constitutes a possible mechanism for the "coevolution" or "matching" among flows, stocks, and parameters.

  15. The burning fuse model of unbecoming in time

    Science.gov (United States)

    Norton, John D.

    2015-11-01

    In the burning fuse model of unbecoming in time, the future is real and the past is unreal. It is used to motivate the idea that there is something unbecoming in the present literature on the metaphysics of time: its focus is merely the assigning of a label "real."

  16. Hidden Markov Models for Time Series An Introduction Using R

    CERN Document Server

    Zucchini, Walter

    2009-01-01

    Illustrates the flexibility of HMMs as general-purpose models for time series data. This work presents an overview of HMMs for analyzing time series data, from continuous-valued, circular, and multivariate series to binary data, bounded and unbounded counts and categorical observations.
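As a toy illustration of the HMM machinery such a text covers, the forward algorithm computes the probability of an observation sequence by summing over hidden-state paths in polynomial time. All probabilities below are made up for the example:

```python
def forward_prob(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: total probability of
    the observation sequence, summing over hidden state paths in
    O(T * N^2) instead of enumerating all N^T paths.
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)

# made-up two-state example (probabilities are illustrative only)
pi = [0.6, 0.4]                 # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]    # transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]    # emission matrix
p = forward_prob([0, 1, 0], pi, A, B)
```

For a three-step sequence the result can be checked against brute-force enumeration of the 2³ hidden paths.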

  17. Mean Time to System Failure and Availability Modeling of a ...

    African Journals Online (AJOL)

    In this study models for mean time to system failure and availability have been developed to study the effect of failure rate on some measures of system effectiveness. Using Chapman Kolmogorov's forward equations methods, explicit expressions for measures of system effectiveness like mean time to system failure (MTSF) ...

  18. Sparse time series chain graphical models for reconstructing genetic networks

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of

  19. Modeling electricity loads in California: a continuous-time approach

    Science.gov (United States)

    Weron, R.; Kozłowska, B.; Nowicka-Zagrajek, J.

    2001-10-01

    In this paper we address the issue of modeling electricity loads and prices with diffusion processes. More specifically, we study models which belong to the class of generalized Ornstein-Uhlenbeck processes. After comparing properties of simulated paths with those of deseasonalized data from the California power market and performing out-of-sample forecasts we conclude that, despite certain advantages, the analyzed continuous-time processes are not adequate models of electricity load and price dynamics.
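A generalized Ornstein-Uhlenbeck process of the kind examined can be simulated with a simple Euler-Maruyama scheme; the parameters below are arbitrary, not fitted to California load or price data:

```python
import math
import random

def simulate_ou(x0, theta, mu, sigma, dt, steps, seed=1):
    """Euler-Maruyama discretization of an Ornstein-Uhlenbeck
    process dX = theta * (mu - X) dt + sigma dW: mean-reverting
    toward mu at rate theta, with Gaussian increments.
    """
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou(5.0, 2.0, 1.0, 0.3, 0.01, 1000)
```

With sigma set to zero the path reduces to pure exponential relaxation toward mu, which is a quick sanity check on the discretization.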

  20. Experimental Validation of Plastic Mandible Models Produced by a “Low-Cost” 3-Dimensional Fused Deposition Modeling Printer

    Science.gov (United States)

    Maschio, Federico; Pandya, Mirali; Olszewski, Raphael

    2016-01-01

    Background The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modelling printer. Material/Methods Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. Results The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Conclusions Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field. PMID:27003456

  1. Experimental Validation of Plastic Mandible Models Produced by a "Low-Cost" 3-Dimensional Fused Deposition Modeling Printer.

    Science.gov (United States)

    Maschio, Federico; Pandya, Mirali; Olszewski, Raphael

    2016-03-22

    The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modelling printer. Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field.
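The MAD and MDE statistics reported above are straightforward to compute from matched distance lists. The percentage-of-reference definition of MDE used below is an assumption, since the abstract does not spell out the exact formula:

```python
def mad_mde(reference, measured):
    """Mean absolute difference (same units as the input, here mm)
    and mean dimensional error (percent of the reference distance)
    between matched distance measurements, e.g. dry mandible vs.
    printed replica.
    """
    diffs = [abs(r - m) for r, m in zip(reference, measured)]
    mad = sum(diffs) / len(diffs)
    mde = 100.0 * sum(d / r for d, r in zip(diffs, reference)) / len(diffs)
    return mad, mde

mad, mde = mad_mde([10.0, 20.0], [9.0, 22.0])  # hypothetical distances
```

Note how the same absolute error contributes a smaller MDE on longer distances, matching the study's observation that MDE drops for distances greater than 12 mm.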

  2. Reactive Aggregate Model Protecting Against Real-Time Threats

    Science.gov (United States)

    2014-09-01

    SQL server and has four tables: Accumulator, BlockState, Epoc, Signature, and Weight. Accumulator columns were RemoteIP, Test1, Test2, Test3 and Time... block occurred. The Epoc table was a pivot necessary to convert the time stamps to Epoch time format. The Signature table held values indicative of... the "processing unit" of the RAMPART model (Figure 6). The RAMPART database exists on a Windows SQL server and has four tables: Accumulator, BlockState, Epoc

  3. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared...

  4. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time series and computing the change in response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
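The perturb-and-difference procedure described above can be sketched generically: perturb each forcing value in turn and record the change in model output per unit perturbation. The trained MWA-ANN models are not reproduced here, so a simple moving-window average stands in for the model:

```python
def sensitivity(model, forcing, eps=1e-4):
    """One-at-a-time forcing-response sensitivity: finite-difference
    change in output per unit perturbation of each forcing value.
    `model` is any callable mapping a forcing list to a scalar.
    """
    base = model(forcing)
    out = []
    for i in range(len(forcing)):
        bumped = list(forcing)
        bumped[i] += eps
        out.append((model(bumped) - base) / eps)
    return out

mwa = lambda f: sum(f[-5:]) / 5.0   # toy 5-step moving-window response
s = sensitivity(mwa, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```

For this toy response the sensitivity is zero for the value outside the window and 0.2 for each value inside it, which the finite difference recovers exactly because the model is linear.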

  5. Can DEM time series produced by UAV be used to quantify diffuse erosion in an agricultural watershed?

    Science.gov (United States)

    Pineux, N.; Lisein, J.; Swerts, G.; Bielders, C. L.; Lejeune, P.; Colinet, G.; Degré, A.

    2017-03-01

    Erosion and deposition modelling should rely on field data. Currently these data are seldom available at large spatial scales and/or at high spatial resolution. In addition, conventional erosion monitoring approaches are labour intensive and costly. This calls for the development of new approaches for field erosion data acquisition. As a result of rapid technological developments and low cost, unmanned aerial vehicles (UAV) have recently become an attractive means of generating high resolution digital elevation models (DEMs). The use of UAV to observe and quantify gully erosion is now widely established. However, in some agro-pedological contexts, soil erosion results from multiple processes, including sheet and rill erosion, tillage erosion and erosion due to harvest of root crops. These diffuse erosion processes often represent a particular challenge because of the limited elevation changes they induce. In this study, we propose to assess the reliability and development perspectives of UAV to locate and quantify erosion and deposition in a context of an agricultural watershed with silt loam soils and a smooth relief. Erosion and deposition rates derived from high resolution DEM time series are compared to field measurements. The UAV technique demonstrates a high level of flexibility and can be used, for instance, after a major erosive event. It delivers a very high resolution DEM (pixel size: 6 cm) which allows us to compute high resolution runoff pathways. This could enable us to precisely locate runoff management practices such as fascines. Furthermore, the DEMs can be used diachronically to extract elevation differences before and after a strongly erosive rainfall and be validated by field measurements. While the analysis for this study was carried out over 2 years, we observed a tendency along the slope from erosion to deposition. Erosion and deposition patterns detected at the watershed scale are also promising. Nevertheless, further development in the

  6. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO) based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  7. Space-time modeling of electricity spot prices

    DEFF Research Database (Denmark)

    Abate, Girum Dagnachew; Haldrup, Niels

    In this paper we derive a space-time model for electricity spot prices. A general spatial Durbin model that incorporates the temporal as well as spatial lags of spot prices is presented. Joint modeling of space-time effects is necessarily important when prices and loads are determined in a network of power exchange areas. We use data from the Nord Pool electricity power exchange area bidding markets. Different spatial weight matrices are considered to capture the structure of the spatial dependence process across different bidding markets, and statistical tests show significant spatial dependence...

  8. Time dependent modeling of non-LTE plasmas: Final report

    International Nuclear Information System (INIS)

    1988-06-01

    During the period of performance of this contract Science Applications International Corporation (SAIC) has aided Lawrence Livermore National Laboratory (LLNL) in the development of an unclassified modeling tool for studying time evolution of high temperature ionizing and recombining plasmas. This report covers the numerical code developed, (D)ynamic (D)etailed (C)onfiguration (A)ccounting (DDCA), which was written to run on the National Magnetic Fusion Energy Computing Center (NMFECC) network as well as the classified Livermore Computer Center (OCTOPUS) network. DDCA is a One-Dimensional (1D) time dependent hydrodynamic model which makes use of the non-LTE detailed atomic physics ionization model DCA. 5 refs

  9. Towards Using Smartphones to Refine Sunrise and Sunset Time Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2015-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have a minimum error of about one minute. Particularly at higher latitudes, slight changes in refraction may result in significant discrepancies, such as causing the Sun to appear to set several minutes prematurely or remain continuously above the horizon for an unexpectedly long time. Atmospheric models could be better constrained by a substantial collection of observed sunset times with associated meteorological data such as temperature, pressure, and height of the observer. We report on the development of a project recording the necessary data with a few smartphones, which will then form the groundwork of a citizen science project.

  10. Spatio-temporal modeling for real-time ozone forecasting.

    Science.gov (United States)

    Paci, Lucia; Gelfand, Alan E; Holland, David M

    2013-05-01

    The accurate assessment of exposure to ambient ozone concentrations is important for informing the public and pollution monitoring agencies about ozone levels that may lead to adverse health effects. High-resolution air quality information can offer significant health benefits by leading to improved environmental decisions. A practical challenge facing the U.S. Environmental Protection Agency (USEPA) is to provide real-time forecasting of current 8-hour average ozone exposure over the entire conterminous United States. Such real-time forecasting is now provided as spatial forecast maps of current 8-hour average ozone defined as the average of the previous four hours, current hour, and predictions for the next three hours. Current 8-hour average patterns are updated hourly throughout the day on the EPA-AIRNow web site. The contribution here is to show how we can substantially improve upon current real-time forecasting systems. To enable such forecasting, we introduce a downscaler fusion model based on first differences of real-time monitoring data and numerical model output. The model has a flexible coefficient structure and uses an efficient computational strategy to fit model parameters. Our hybrid computational strategy blends continuous background updated model fitting with real-time predictions. Model validation analyses show that we are achieving very accurate and precise ozone forecasts.

  11. Identification of Cereulide-Producing Bacillus cereus by Nucleic Acid Chromatography and Reverse Transcription Real-Time PCR.

    Science.gov (United States)

    Ueda, Shigeko; Yamaguchi, Manami; Eguchi, Kayoko; Iwase, Miki

    2016-01-01

    RNA extracts were analyzed with a nucleic acid sequence-based amplification (NASBA) nucleic acid chromatography assay and a TaqMan probe-based reverse transcription quantitative PCR assay (RT-qPCR) for identification of cereulide-producing Bacillus cereus. All 100 emetic B. cereus strains were found to give positive results, but 50 diarrheal B. cereus strains and other bacterial species showed negative results in the NASBA-chromatography. That is, the assay could selectively identify the emetic strains among B. cereus strains. Also, B. cereus contents of more than 10^7 cfu/ml were required for the identification of the cereulide-producing strains in this assay. In the RT-qPCR assays, all 100 emetic type strains of B. cereus produced 10^2-10^4 copy numbers per ng of the RNA preparation, and the strains producing 10^4 copies included ones which had high vacuolation activity toward HEp-2 cells.

  12. Evaluation of digital model accuracy and time-dependent deformation of alginate impressions.

    Science.gov (United States)

    Cesur, M G; Omurlu, I K; Ozer, T

    2017-09-01

    The aim of this study was to evaluate the accuracy of digital models produced with a three-dimensional dental scanner, and to test the dimensional stability of alginate impressions stored for no time (T0), 1 day (T1), and 2 days (T2). A total of sixty impressions were taken from a master model with an alginate, and were poured into plaster models after the three different storage periods. Twenty impressions were directly scanned (negative digital models), after which plaster models were poured and scanned (positive digital models) immediately. The remaining 40 impressions were poured after 1 and 2 days. In total, 9 points and 11 linear measurements were used to analyze the plaster models and the negative and positive digital models. Time-dependent deformation of the alginate impressions and the accuracy of the conventional plaster models and digital models were evaluated separately. Plaster models and negative and positive digital models showed significant differences in nearly all measurements at T0, T1, and T2 (P < 0.05), and demonstrated statistically significant differences at T2 (P < 0.05). Producing digital models from alginate impressions is a practicable method for orthodontists.

  13. Chromospheric extents predicted by time-dependent acoustic wave models

    Science.gov (United States)

    Cuntz, Manfred

    1990-01-01

    Theoretical models for chromospheric structures of late-type giant stars are computed, including the time-dependent propagation of acoustic waves. Models with short-period monochromatic shock waves as well as a spectrum of acoustic waves are discussed, and the method is applied to the stars Arcturus, Aldebaran, and Betelgeuse. Chromospheric extents, defined by the monotonic decrease with height of the time-averaged electron densities, are found to be 1.12, 1.13, and 1.22 stellar radii for the three stars, respectively; this corresponds to a time-averaged electron density of 10^7 cm^-3. Predictions of the extended chromospheres obtained using a simple scaling law agree well with those obtained by the time-dependent wave models; thus, the chromospheres of all stars for which the scaling law is valid consist of the same number of pressure scale heights.

  14. Chromospheric extents predicted by time-dependent acoustic wave models

    Energy Technology Data Exchange (ETDEWEB)

    Cuntz, M. (Joint Institute for Laboratory Astrophysics, Boulder, CO (USA) Heidelberg Universitaet (Germany, F.R.))

    1990-01-01

    Theoretical models for chromospheric structures of late-type giant stars are computed, including the time-dependent propagation of acoustic waves. Models with short-period monochromatic shock waves as well as a spectrum of acoustic waves are discussed, and the method is applied to the stars Arcturus, Aldebaran, and Betelgeuse. Chromospheric extents, defined by the monotonic decrease with height of the time-averaged electron densities, are found to be 1.12, 1.13, and 1.22 stellar radii for the three stars, respectively; this corresponds to a time-averaged electron density of 10^7 cm^-3. Predictions of the extended chromospheres obtained using a simple scaling law agree well with those obtained by the time-dependent wave models; thus, the chromospheres of all stars for which the scaling law is valid consist of the same number of pressure scale heights. 74 refs.

  15. Use of simulation models to study the dynamic of recall of non-conform perishable produce through the supply chain

    DEFF Research Database (Denmark)

    Busato, P.; Sopegno, A.; Berruto, R.

    2013-01-01

    While much research has dealt with investigating traceability within the firm, little work has been carried out on the investigation of the traceability system over the whole supply chain. The supply chain of fresh produce is constituted of many links: producer/grower, warehouse, packing centre, distribution centre, retailers and finally the consumer. Each of these is a system itself that interacts with the other components of the supply chain. The non-conformity could occur in each of these links. Because of processing plant requirements, storage requirements, and savings in the traceability process, small lots are often merged together to form a large lot at some points in the supply chain. A larger lot size could imply a higher risk for consumers in case of recall of the produce, and a much higher recall time and cost for the supply chain. When a non-conformity occurs, the time to recall the produce depends on many factors...

  16. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modelling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks. We investigate the implications of these models on the third-moment structure of the marginal distribution as well as conditions under which the unconditional distribution exhibits skewness and nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible...

  17. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
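Penalized least squares of the kind proposed performs variable selection by shrinking weak coefficients exactly to zero. In the orthonormal-design special case the problem decouples coordinate-wise into soft thresholding, which makes the selection mechanism easy to see; this is a generic sketch, not the authors' semiparametric estimator:

```python
def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t): shrink z toward zero by
    t, and set it to exactly zero inside [-t, t]."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_orthonormal(xty, lam):
    """L1-penalized least squares under an orthonormal design,
    where each coefficient decouples: beta_j = S(x_j'y, lam).
    Illustrates how the penalty zeroes out weak variables.
    """
    return [soft_threshold(z, lam) for z in xty]

beta = lasso_orthonormal([2.5, -0.3, 1.1], 0.5)  # hypothetical correlations
```

The middle coefficient falls inside the threshold and is removed from the model, while the other two are retained and shrunk.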

  18. Forecast model of landslides in a short time

    International Nuclear Information System (INIS)

    Sanchez Lopez, Reinaldo

    2006-01-01

    IDEAM, in fulfilment of its functions as a member of the national technical committee for disaster prevention and attention (SNPAD), carries out real-time follow-up, monitoring and forecasting of the environmental dynamics that in extreme situations constitute natural threats and risks. One of the most frequent and highest-impact dynamics is related to landslides, which persistently affect people's lives, infrastructure, socioeconomic activities and the balance of the environment. Landslides in Colombia and in the world are caused mainly by the effects of rain; for that reason, IDEAM has been developing a forecast model as an instrument for short-term risk management. This article presents aspects related to the model's structure, operation, spatio-temporal resolution, products, results, achievements and projections. Conceptually, the model is supported by the principle of the spatio-temporal dynamics of the processes that consolidate natural hazards, particularly in areas where man has been building up risk. Structurally, the model is composed of two sub-models: a general terrain susceptibility model and a critical rainfall model as the triggering factor that consolidates the hazard process. In real time the model works as a GIS, permitting the automatic zoning of landslide hazard in order to issue public advisory warnings and help decision makers manage the risk frequently caused by these events in the country.

  19. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  20. Modeling of the Response Time of Thermal Flow Sensors

    Directory of Open Access Journals (Sweden)

    Walter Lang

    2011-10-01

Full Text Available This paper introduces a simple theoretical model for the response time of thermal flow sensors. Response time is defined here as the time needed for the sensor output signal to reach 63.2% of its final amplitude following a change in fluid flow. The model uses the finite-difference method to solve the heat transfer equations, taking into account the transient conduction and convection between the sensor membrane and the surrounding fluid. The program's results agree with experimental measurements and explain the dependence of the response time on the flow velocity and the sensor geometry. Values of the response time vary from about 5 ms for stagnant flow to 1.5 ms at a flow velocity of 44 m/s.
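    As an illustration of the response-time idea, the sketch below collapses the paper's finite-difference membrane model into a single lumped node with an assumed time constant `tau` (the full model resolves conduction and convection spatially); for such a first-order system, the 63.2% response time equals `tau`:

```python
import math

def response_time_63(tau, dt=1e-6, t_max=1.0):
    """Explicit finite-difference (Euler) integration of the lumped
    heat balance dT/dt = -(T - T_inf) / tau after a step change in
    fluid temperature; returns the time to reach 63.2% of the step."""
    T, T_inf, t = 0.0, 1.0, 0.0
    target = T_inf * (1.0 - math.exp(-1.0))  # 63.2% of the amplitude
    while T < target and t < t_max:
        T += dt * (T_inf - T) / tau
        t += dt
    return t

# A faster flow raises the convective coefficient and shrinks tau,
# which is why the reported response time falls from ~5 ms to ~1.5 ms.
t63 = response_time_63(5e-3)
```

    Higher flow velocity enters this sketch only through a smaller `tau`; the paper's model derives that dependence from the sensor geometry and the convection terms.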

  1. Alighting and Boarding Time Model of Passengers at a LRT Station in Kuala Lumpur

    Directory of Open Access Journals (Sweden)

    Hor Peay San

    2017-01-01

Full Text Available A study was conducted to identify the factors affecting the alighting and boarding rates of passengers and to establish a prediction model for the alighting and boarding time of passengers of a passenger rail service in Malaysia. Data were collected at the KL Sentral LRT station during the morning and evening peak hours over a period of 5 working days. Results show that passenger behaviour, passenger volume, crowdedness in the train and mixture of flows have significant effects on the alighting and boarding time, although mixture of flows is not significant in the resulting prediction model owing to passenger behaviour on the platform.

  2. Real time modeling, simulation and control of dynamical systems

    CERN Document Server

    Mughal, Asif Mahmood

    2016-01-01

    This book introduces modeling and simulation of linear time invariant systems and demonstrates how these translate to systems engineering, mechatronics engineering, and biomedical engineering. It is organized into nine chapters that follow the lectures used for a one-semester course on this topic, making it appropriate for students as well as researchers. The author discusses state space modeling derived from two modeling techniques and the analysis of the system and usage of modeling in control systems design. It also contains a unique chapter on multidisciplinary energy systems with a special focus on bioengineering systems and expands upon how the bond graph augments research in biomedical and bio-mechatronics systems.

  3. Time-dependent model of the Martian atmosphere for use in orbit lifetime and sustenance studies

    Science.gov (United States)

    Culp, R. D.; Stewart, A. I.

    1984-01-01

A time-dependent model of the Martian atmosphere suitable for calculating long-term aerodynamic effects on low-altitude satellites is presented. The atmospheric model is both position dependent, through latitude and longitude effects, and time dependent. The time dependency includes diurnal and seasonal effects, effects of annual motion, long- and short-term solar activity effects, and periodic dust storm effects. Nine constituent gases are included in the model. Uncertainties in exospheric temperature, turbidity, and turbopause altitude are used to produce bounds on the expected density. A computer model is presented: a Fortran subroutine which, when given the Julian date and the Cartesian positions of the sun and the spacecraft in aerocentric coordinates, returns the local values of mass density, temperature, and scale height, together with upper and lower bounds on the mass density.

  5. Large time periodic solutions to coupled chemotaxis-fluid models

    Science.gov (United States)

    Jin, Chunhua

    2017-12-01

In this paper, we deal with the time periodic problem for coupled chemotaxis-fluid models. We prove the existence of large time periodic strong solutions for the full chemotaxis-Navier-Stokes system in spatial dimension N=2, and the existence of large time periodic strong solutions for the chemotaxis-Stokes system in spatial dimension N=3. On this basis, the regularity of the solutions can be further improved. More precisely, if the time periodic source g and the potential force \

  6. Reading and a diffusion model analysis of reaction time.

    Science.gov (United States)

    Naples, Adam; Katz, Leonard; Grigorenko, Elena L

    2012-01-01

    Processing speed is associated with reading performance. However, the literature is not clear either on the definition of processing speed or on why and how it contributes to reading performance. In this study we demonstrated that processing speed, as measured by reaction time, is not a unitary construct. Using the diffusion model of two-choice reaction time, we assessed processing speed in a series of same-different reaction time tasks for letter and number strings. We demonstrated that the association between reaction time and reading performance is driven by processing speed for reading-related information, but not motor or sensory encoding speed.

  7. Resonantly produced 7 keV sterile neutrino dark matter models and the properties of Milky Way satellites.

    Science.gov (United States)

    Abazajian, Kevork N

    2014-04-25

    Sterile neutrinos produced through a resonant Shi-Fuller mechanism are arguably the simplest model for a dark matter interpretation of the origin of the recent unidentified x-ray line seen toward a number of objects harboring dark matter. Here, I calculate the exact parameters required in this mechanism to produce the signal. The suppression of small-scale structure predicted by these models is consistent with Local Group and high-z galaxy count constraints. Very significantly, the parameters necessary in these models to produce the full dark matter density fulfill previously determined requirements to successfully match the Milky Way Galaxy's total satellite abundance, the satellites' radial distribution, and their mass density profile, or the "too-big-to-fail problem." I also discuss how further precision determinations of the detailed properties of the candidate sterile neutrino dark matter can probe the nature of the quark-hadron transition, which takes place during the dark matter production.

  8. Predicting third molar surgery operative time: a validated model.

    Science.gov (United States)

    Susarla, Srinivas M; Dodson, Thomas B

    2013-01-01

    The purpose of the present study was to develop and validate a statistical model to predict third molar (M3) operative time. This was a prospective cohort study consisting of a sample of subjects presenting for M3 removal. The demographic, anatomic, and operative variables were recorded for each subject. Using an index sample of randomly selected subjects, a multiple linear regression model was generated to predict the operating time. A nonoverlapping group of randomly selected subjects (validation sample) was used to assess model accuracy. P≤.05 was considered significant. The sample was composed of 150 subjects (n) who had 450 (k) M3s removed. The index sample (n=100 subjects, k=313 M3s extracted) had a mean age of 25.4±10.0 years. The mean extraction time was 6.4±7.0 minutes. The multiple linear regression model included M3 location, Winter's classification, tooth morphology, number of teeth extracted, procedure type, and surgical experience (R2=0.58). No statistically significant differences were seen between the index sample and the validation sample (n=50, k=137) for any of the study variables. Compared with the index model, the β-coefficients of the validation model were similar in direction and magnitude for most variables. Compared with the observed extraction time for all teeth in the sample, the predicted extraction time was not significantly different (P=.16). Fair agreement was seen between the β-coefficients for our multiple models in the index and validation populations, with no significant difference in the predicted and observed operating times. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    Science.gov (United States)

    Stránský, V; Thinová, L

    2017-11-01

In 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series over the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal integrated autoregressive moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals can be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurement. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
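    The residual-screening idea behind regARIMA(5,1,3) can be illustrated with a much simpler stand-in. The sketch below fits a plain AR(1) by least squares (not the paper's SARIMAX model, and on synthetic rather than radon data) and flags unusually large residuals, the kind of anomalies one would then compare against external events such as seismic records:

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

rng = random.Random(0)
phi_true, x = 0.8, [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + rng.gauss(0.0, 1.0))

phi = fit_ar1(x)
resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
# Residuals beyond a wide band are candidate anomalies: observations
# the fitted time-series model cannot explain.
anomalies = [t for t, r in enumerate(resid, start=1) if abs(r) > 4.0 * sigma]
```

    The full model additionally handles differencing, seasonality and exogenous atmospheric regressors, but the anomaly logic is the same: model what is explainable, then inspect what is left over.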

  10. Mechatronic modeling of real-time wheel-rail contact

    CERN Document Server

    Bosso, Nicola; Gugliotta, Antonio; Somà, Aurelio

    2013-01-01

Real-time simulations of the behaviour of a rail vehicle require realistic solutions of the wheel-rail contact problem that can work in real-time mode. Examples of such solutions for the online mode are well known and are implemented in standard commercial simulation codes for rail vehicle dynamics. This book is the result of the research activities carried out by the Railway Technology Lab of the Department of Mechanical and Aerospace Engineering at Politecnico di Torino. It presents work on the project for the development of a real-time wheel-rail contact model and provides the simulation results obtained with dSpace real-time hardware. In addition, the implementation of the contact model in a real-time model of the complex mechatronic system of a scaled test rig is presented, which may be useful for the further validation of the real-time contact model against experiments on a full-scale test rig.

  11. Space and Time Ontology: New Models for New Physics

    Directory of Open Access Journals (Sweden)

    Sara Lumbreras Sancho

    2015-02-01

Full Text Available Nickel proposes a model for movement – and, in general, for change – in which each instant in time (time being characterized as the set of real numbers) is assigned to one point in a configuration space. As much as this model seems to fit our experience intuitively, it implies a number of assumptions about the nature of space and time that are interesting to explore. Different perspectives have been developed throughout history, and it could well be that the next scientific revolution will be set in motion by an innovative conception of space and time. One of these alternative perspectives was proposed by Julian Barbour, who has developed a new model of physics in which time does not exist [Barbour, 1999]. This paper reviews not only this concept but also other similarly provocative ideas that might prove useful for improving our understanding of the universe. Before this, the relevance of the philosophy of space and time is briefly outlined and its history reviewed to provide background for the discussed models. Finally, an approach in which space and time are defined only by convention is considered.

  12. Space-time scenarios of wind power generation produced using a Gaussian copula with parametrized precision matrix

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Madsen, Henrik

The emphasis in this work is placed on generating space-time trajectories (also referred to as scenarios) of wind power generation. This calls for prediction of multivariate densities describing wind power generation at a number of distributed locations and for a number of successive lead times. … and direction-dependent cross-correlations. Accounting for the space-time effects is shown to be crucial for generating high quality scenarios.
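    A minimal sketch of the core mechanism, under strong simplifying assumptions: a single pair of locations/lead times with a fixed correlation `rho`, whereas the paper parametrizes a full precision matrix over many sites and horizons. A Gaussian copula correlates two uniforms, which would then be pushed through each site's predictive marginal density to obtain wind power scenarios:

```python
import math
import random

def gaussian_copula_pairs(rho, n, seed=1):
    """Draw n pairs of uniforms whose dependence is a Gaussian copula
    with correlation rho: correlate two standard normals via Cholesky,
    then map each through the standard normal CDF."""
    rng = random.Random(seed)
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((cdf(z1), cdf(z2)))
    return pairs

# High rho: the two uniforms (and hence the two scenario values after
# applying the marginal quantile functions) move together.
pairs = gaussian_copula_pairs(0.9, 2000)
```

    Replacing the single `rho` with a parametrized precision matrix is what lets the paper's approach capture distance- and direction-dependent cross-correlations at scale.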

  13. Analysis and modeling of sintering of Sr-hexaferrite produced by PIM technology

    Directory of Open Access Journals (Sweden)

    Zlatkov B.S.

    2011-01-01

Full Text Available The powder injection moulding (PIM) technology has lately become more and more significant owing to the complex designs it makes possible and its good repeatability. The technology requires optimization of every step, starting with the material and binder and continuing through the injection, debinding and sintering parameters. Sintering is one of the key links in this chain. The PIM process is specific in that, during feedstock injection, the powder particles mixed into the binder do not come into mechanical contact, so shrinkage during sintering of PIM samples is high. In this work we have analyzed and modeled the sintering process of isotropic PIM samples of Sr-hexaferrite. The Master Sintering Curve (MSC) principle has been applied to analyze the sintering of two types of PIM Sr-hexaferrite samples: one with the binder completely removed, and one with only the extraction step of the debinding procedure completed, so that thermal debinding proceeds simultaneously with sintering. The influence of the heating rate on the resulting sample microstructures has also been analyzed, and the influence of sintering time and temperature was analyzed using three different phenomenological equations.

  14. Model Passengers’ Travel Time for Conventional Bus Stop

    Directory of Open Access Journals (Sweden)

    Guangzhao Xin

    2014-01-01

Full Text Available A limited number of berths can force a subsequent bus to stop upstream of a bus stop when all berths are occupied. When this occurs, passengers waiting on the platform usually prefer to walk to the stopped bus, which adds walking time before boarding. Passengers' travel time at a bus stop is therefore divided into waiting time, additional walking time, and boarding time. This paper proposes a mathematical model for analyzing passengers' travel time at a conventional bus stop based on the theory of stochastic service systems. Field-measured and simulated data were used to demonstrate the effectiveness of the proposed model. The results show that short headways can reduce passengers' waiting time at a bus stop. Meanwhile, the theoretical analysis explains the inefficiency of bus stops with more than three berths from the perspective of passengers' additional walking time: additional walking time increases sharply once the number of berths at a bus stop exceeds the threshold of three.
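    The three-way decomposition can be sketched as follows. All numeric parameters (berth length, walking speed, boarding time) are illustrative assumptions, not values from the paper, and the mean-distance walking term is a deliberate simplification of the stochastic service model:

```python
def expected_travel_time(headway_s, berths, berth_len_m=12.0,
                         walk_speed_mps=1.2, board_time_s=25.0):
    """Travel time at the stop = waiting + additional walking + boarding.
    Waiting is half the headway for random passenger arrivals; extra
    walking is the mean distance to a bus stopped at one of the berths."""
    wait = headway_s / 2.0
    extra_walk = berth_len_m * (berths - 1) / 2.0 / walk_speed_mps
    return wait + extra_walk + board_time_s

# Additional walking time grows with the number of berths, which is the
# qualitative effect the paper quantifies for stops with > 3 berths.
times = [expected_travel_time(300.0, b) for b in (1, 2, 3, 4)]
```

    Even this crude sketch reproduces the direction of the paper's conclusion: waiting time is governed by headway, while each extra berth adds walking distance.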

  15. Experimental Model of Cerebral Hypoperfusion Produced Memory-learning Deficits, and Modifications in Gene Expression

    Directory of Open Access Journals (Sweden)

    Rilda LEÓN

    2015-01-01

    Full Text Available Cerebral ischemia is a major cause of death, for this reason animal models of cerebral ischemia are widely used to study both the pathophysiology of ischemic phenomenon and the evaluation of possible therapeutic agents with protective or regenerative properties. The objectives of this study were to examine the presence of neuronal damage in different brain areas following the ischemic event, and assess consequences of such activities on the processes of memory and learning. The study group included an experimental group ischemic animals (30 rats with permanent bilateral occlusion of the carotids, and a control group. Was evaluated gene expression and inflammatory ischemic by qPCR techniques 24h post injury, brain tissue morphology in areas of cortex, striatum and hippocampus seven days post injury and processes of memory and learning, 12 days post injury. The morphological studies showed that the procedure induces death of cell populations in cortex, striatum and hippocampus, ischemia modified gfap gene expression and ho, il-6, il-17 and ifn-γ, which can be used as a marker of early ischemic process. Additionally, the ischemic injury caused spatial memory decline. This characterization gives us an experimental model to develop future studies on the pathophysiology of ischemic events and assessing therapeutic strategies. MODELO EXPERIMENTAL DE HIPOPERFUSIÓN CEREBRAL PRODUCE DÉFICIT DE LA MEMORIA Y APRENDIZAJE Y MODIFICACIONES EN LA EXPRESIÓN DE GENES. A escala mundial, la isquemia cerebral constituye una de las principales causas de muerte, por lo que los modelos animales de isquemia cerebral son extensamente usados tanto en el estudio de la pato-fisiología del fenómeno isquémico; como en la evaluación de agentes terapéuticos con posible efecto protector o regenerador.  Los objetivos de este estudio fueron examinar la presencia de daño neuronal en diferentes áreas cerebrales como consecuencia del evento isquémico; así como evaluar

  16. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
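    The continuous-to-discrete-time mapping that underlies CARMA estimation can be illustrated for the simplest case, a CAR(1)/Ornstein-Uhlenbeck process. This is a generic textbook discretization, not ctsem's implementation:

```python
import math

def car1_to_discrete(a, sigma, dt):
    """Exact discretization of the CAR(1)/Ornstein-Uhlenbeck process
    dx = a*x dt + sigma dW (with a < 0 for stability):
    x[t+dt] = phi * x[t] + e,  e ~ N(0, q)."""
    phi = math.exp(a * dt)
    q = sigma ** 2 * (math.exp(2.0 * a * dt) - 1.0) / (2.0 * a)
    return phi, q

# The same continuous-time parameters imply mutually consistent discrete
# parameters at any observation spacing, which is what lets continuous-time
# SEM handle unequally spaced panel measurements.
phi, q = car1_to_discrete(-0.5, 1.0, 1.0)
```

    Higher-order CARMA(p, q) models generalize this with a matrix exponential of the state-space drift matrix, but the principle is the same.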

  17. An economic production model for time dependent demand with rework and multiple production setups

    Directory of Open Access Journals (Sweden)

    S.R. Singh

    2014-04-01

Full Text Available In this paper, we present a model for time-dependent demand with multiple production setups and a rework setup. The production rate is demand dependent and greater than the demand rate. The production facility produces items in m production setups and one rework setup (an (m, 1) policy). Rework is a major driver of reverse logistics and green supply chains, since it reduces the cost of production and other ecological burdens. Most researchers have developed rework models without deteriorating items. A numerical example and sensitivity analysis are presented to illustrate the model.
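    For orientation, the classic single-setup economic production quantity (EPQ) that this line of models generalizes can be computed as below; the paper's (m, 1) multi-setup rework policy and time-dependent demand are not captured by this one-setup formula:

```python
import math

def epq(demand_rate, setup_cost, holding_cost, production_rate):
    """Classic economic production quantity for a single setup:
    Q* = sqrt(2*D*S / (H * (1 - D/P))), valid for P > D."""
    return math.sqrt(2.0 * demand_rate * setup_cost /
                     (holding_cost * (1.0 - demand_rate / production_rate)))

# Illustrative numbers only: D = 1000 units/yr, S = 50, H = 2, P = 4000.
q_star = epq(1000.0, 50.0, 2.0, 4000.0)
```

    The production-rate condition P > D mirrors the paper's assumption that production exceeds the demand rate.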

  18. The TIME Model: Time to Make a Change to Integrate Technology

    Directory of Open Access Journals (Sweden)

    Debby Mitchell

    2004-06-01

Full Text Available The purpose of this article is to report the successful creation and implementation of an instructional model designed to assist educators in infusing technology into the curriculum while at the same time creating opportunities for faculty to learn and to become more proficient and successful at integrating technology into their own classroom curriculum. The model was successfully tested and implemented with faculty, inservice and preservice teachers at the University of Central Florida (UCF). Faculty, inservice and preservice teachers were successfully trained to integrate technology using a theme-based curriculum with an instructional model called the TIME model, which consists of twelve elements: Vision, Incentives, Personalization, Awareness, Learning Communities, Action Plan, Research, Development of Modules, Skills, Implementation, Evidence of Change, and Evaluation/Reflection.

  19. Stylised facts of financial time series and hidden Markov models in continuous time

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2015-01-01

Hidden Markov models are often applied in quantitative finance to capture the stylised facts of financial returns. They are usually discrete-time models, and the number of states rarely exceeds two because of the quadratic increase in the number of parameters with the number of states. This paper presents an extension to continuous time where it is possible to increase the number of states with a linear rather than quadratic growth in the number of parameters. The possibility of increasing the number of states leads to a better fit to both the distributional and temporal properties of daily returns.

  20. Rising time restoration for nuclear pulse using a mathematic model.

    Science.gov (United States)

    Wang, Min; Hong, Xu; Zhou, Jian-Bin; Liu, Ying; Fei, Peng; Wan, Wen-Jie; Zhou, Wei; Ma, Ying-Jie; Liu, Yi; Ding, Wei-Cheng

    2018-01-17

The rising time of a nuclear pulse is slowed before being digitized because of the effect of distributed capacitance and resistance. This results in waveform distortion of the shaped pulse. In this study, the effect of the distributed capacitance and resistance is treated as equivalent to that of an RC network. A mathematical model of the network is established to restore the rising time of the input nuclear pulse. Experimental results show that the leading edge of the nuclear pulse becomes steep after rising time restoration, and the shape of the shaped pulse is also improved. The energy spectrum obtained with rising time restoration is compared with that obtained without it; the comparison indicates that rising time restoration can extend the measurement range of pulse amplitude without affecting the energy resolution of the system. Copyright © 2018 Elsevier Ltd. All rights reserved.
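    A minimal sketch of the restoration idea, assuming the distributed effects reduce to a single first-order RC stage with a known smoothing coefficient `alpha` (the paper models the full network): inverting the RC recursion recovers the fast leading edge.

```python
def rc_lowpass(x, alpha):
    """Forward model of the slowing effect: a first-order RC stage,
    y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for v in x:
        y += alpha * (v - y)
        out.append(y)
    return out

def rc_restore(y, alpha):
    """Inverse of the RC model above: recovers x[n] from the slowed
    pulse, steepening the leading edge (the rise-time restoration idea)."""
    out, prev = [], 0.0
    for v in y:
        out.append(prev + (v - prev) / alpha)
        prev = v
    return out

step = [0.0] * 5 + [1.0] * 20     # idealized fast-rising pulse
slowed = rc_lowpass(step, 0.2)    # distributed C and R slow the edge
restored = rc_restore(slowed, 0.2)
```

    In practice the inverse amplifies high-frequency noise, which is one reason restoration is combined with subsequent pulse shaping.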

  1. Clean Floquet Time Crystals: Models and Realizations in Cold Atoms

    Science.gov (United States)

    Huang, Biao; Wu, Ying-Hai; Liu, W. Vincent

    2018-03-01

Time crystals, a phase showing spontaneous breaking of time-translation symmetry, have been an intriguing subject for systems far from equilibrium. Recent experiments found such a phase both in the presence and in the absence of localization, while in theories localization by disorder is usually assumed a priori. In this work, we point out that time crystals can generally exist in systems without disorder. A series of clean quasi-one-dimensional models under Floquet driving is proposed to demonstrate this unexpected result in principle. Robust time crystalline orders are found in the strongly interacting regime, along with emergent integrals of motion in the dynamical system, which can be characterized by level statistics and out-of-time-ordered correlators. We propose two cold-atom experimental schemes to realize the clean Floquet time crystals, one making use of dipolar gases and another using synthetic dimensions.

  2. Development of constitutive model for composites exhibiting time dependent properties

    International Nuclear Information System (INIS)

    Pupure, L; Joffe, R; Varna, J; Nyström, B

    2013-01-01

Regenerated cellulose fibres and their composites exhibit highly nonlinear behaviour. The mechanical response of these materials can be successfully described by the model developed by Schapery for time-dependent materials. However, this model requires input parameters that are determined experimentally via a large number of time-consuming tests on the studied composite material. If, for example, the volume fraction of fibres is changed, we have a different material, and a new series of experiments on this new material is required. Therefore, the ultimate objective of our studies is to develop a model that determines the composite behaviour from the behaviour of the composite's constituents. This paper gives an overview of the problems and difficulties associated with the development, implementation and verification of such a model

  3. Disease Extinction Versus Persistence in Discrete-Time Epidemic Models.

    Science.gov (United States)

    van den Driessche, P; Yakubu, Abdul-Aziz

    2018-04-12

We focus on discrete-time infectious disease models in populations that are governed by constant, geometric, Beverton-Holt or Ricker demographic equations, and give a method for computing the basic reproduction number, R0. When R0 < 1 and the demographic population dynamics are asymptotically constant or under geometric growth (non-oscillatory), we prove global asymptotic stability of the disease-free equilibrium of the disease models. Under the same demographic assumption, when R0 > 1, we prove uniform persistence of the disease. We apply our theoretical results to specific discrete-time epidemic models that are formulated for SEIR infections, cholera in humans and anthrax in animals. Our simulations show that a unique endemic equilibrium of each of the three specific disease models is asymptotically stable whenever R0 > 1.
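    A toy discrete-time SIS model (not one of the paper's SEIR, cholera or anthrax formulations, and with constant demography) illustrates the threshold behaviour: with R0 = beta/gamma below 1 the infection dies out, above 1 it persists at an endemic level.

```python
def simulate_sis(beta, gamma, n=10000.0, i0=10.0, steps=500):
    """Minimal discrete-time SIS update with constant population n.
    R0 = beta/gamma: below 1 the disease-free state attracts, above 1
    the infected count settles near the endemic level n * (1 - 1/R0)."""
    s, i = n - i0, i0
    for _ in range(steps):
        new_inf = beta * s * i / n   # new infections this step
        new_rec = gamma * i          # recoveries return to susceptible
        s += new_rec - new_inf
        i += new_inf - new_rec
    return i

die_out = simulate_sis(0.1, 0.2)   # R0 = 0.5: extinction
persist = simulate_sis(0.4, 0.2)   # R0 = 2.0: endemic level ~5000
```

    The two runs mirror the paper's dichotomy: global stability of the disease-free equilibrium below threshold versus uniform persistence above it.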

  4. Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis

    Science.gov (United States)

    Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.

    2015-06-01

This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using a finite mixture of ARIMA models with conditional variance equations such as the ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out which economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of the PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the PSEi is ARIMA(1,1,5)-ARCH(1), and that the Consumer Price Index, crude oil price and foreign exchange rate are concluded to Granger-cause the PSEi.
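    The conditional-variance component can be illustrated with a simulated ARCH(1) process, the simplest member of the family the study draws on (the fitted ARIMA(1,1,5)-ARCH(1) model on PSEi data is not reproduced here):

```python
import random

def simulate_arch1(omega, alpha, n, seed=7):
    """ARCH(1): r[t] = sigma[t] * z[t], sigma[t]^2 = omega + alpha * r[t-1]^2.
    A large shock raises next-period variance: volatility clustering."""
    rng = random.Random(seed)
    r, rets = 0.0, []
    for _ in range(n):
        sig2 = omega + alpha * r * r       # conditional variance
        r = (sig2 ** 0.5) * rng.gauss(0.0, 1.0)
        rets.append(r)
    return rets

rets = simulate_arch1(0.1, 0.5, 20000)
# For |alpha| < 1 the unconditional variance is omega / (1 - alpha) = 0.2 here,
# even though the conditional variance keeps fluctuating with past shocks.
```

    GARCH, EGARCH, TARCH and PARCH generalize this recursion with lagged variances, log-variance dynamics, and asymmetry terms.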

  5. Rapid characterization of dry cured ham produced following different PDOs by proton transfer reaction time of flight mass spectrometry (PTR-ToF-MS).

    Science.gov (United States)

    Del Pulgar, José Sánchez; Soukoulis, Christos; Biasioli, Franco; Cappellin, Luca; García, Carmen; Gasperi, Flavia; Granitto, Pablo; Märk, Tilmann D; Piasentier, Edi; Schuhfried, Erna

    2011-07-15

    In the present study, the recently developed proton transfer reaction time of flight mass spectrometry (PTR-ToF-MS) technique was used for the rapid characterization of dry cured hams produced according to 4 of the most important Protected Designations of Origin (PDOs): an Iberian one (Dehesa de Extremadura) and three Italian ones (Prosciutto di San Daniele, Prosciutto di Parma and Prosciutto Toscano). In total, the headspace composition and respective concentration for nine Spanish and 37 Italian dry cured ham samples were analyzed by direct injection without any pre-treatment or pre-concentration. Firstly, we show that the rapid PTR-ToF-MS fingerprinting in conjunction with chemometrics (Principal Components Analysis) indicates a good separation of the dry cured ham samples according to their production process and that it is possible to set up, using data mining methods, classification models with a high success rate in cross validation. Secondly, we exploited the higher mass resolution of the new PTR-ToF-MS, as compared with standard quadrupole based versions, for the identification of the exact sum formula of the mass spectrometric peaks providing analytical information on the observed differences. The work indicates that PTR-ToF-MS can be used as a rapid method for the identification of differences among dry cured hams produced following the indications of different PDOs and that it provides information on some of the major volatile compounds and their link with the implemented manufacturing practices such as rearing system, salting and curing process, manufacturing practices that seem to strongly affect the final volatile organic profile and thus the perceived quality of dry cured ham. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution on 16 sets of failure time data collected in real software projects
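    The corresponding mean value function is easy to state: with `a` total expected faults and normally distributed failure times, the expected cumulative number of failures by time t is a·Φ((t − μ)/σ). A sketch (the EM-based parameter estimation itself is not shown):

```python
import math

def mean_failures(t, a, mu, sigma):
    """Mean value function of an SRGM with normal failure times:
    m(t) = a * Phi((t - mu) / sigma), where a is the total expected
    number of faults and Phi is the standard normal CDF."""
    return a * 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

# By t = mu, half of the expected faults have surfaced; as t grows,
# m(t) saturates at a.
half = mean_failures(50.0, 100.0, 50.0, 10.0)
```

    The EM algorithm in the paper estimates (a, μ, σ) from observed failure times; the values here are illustrative.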

  7. Time fractional capital-induced labor migration model

    Science.gov (United States)

    Ali Balcı, Mehmet

    2017-07-01

In this study we present a new model of neoclassical economic growth in which workers move from regions with a lower density of capital to regions with a higher density of capital. Since labor migration and capital flow involve self-similarities over long time ranges, we use fractional-order derivatives for the time variable. To solve this model we propose the Variational Iteration Method, and we numerically study labor migration flow data from Turkey, along with other countries, over the period 1966-2014.

  8. Real-time modelling of a pandemic influenza outbreak.

    Science.gov (United States)

    Birrell, Paul J; Pebody, Richard G; Charlett, André; Zhang, Xu-Sheng; De Angelis, Daniela

    2017-10-01

Real-time modelling is an essential component of the public health response to an outbreak of pandemic influenza in the UK. A model for epidemic reconstruction based on realistic epidemic surveillance data has been developed, but this model needs enhancing to provide spatially disaggregated epidemic estimates while ensuring that real-time implementation is feasible. To advance state-of-the-art real-time pandemic modelling by (1) developing an existing epidemic model to capture spatial variation in transmission, (2) devising efficient computational algorithms for the provision of timely statistical analysis and (3) incorporating the above into freely available software. Markov chain Monte Carlo (MCMC) sampling was used to derive Bayesian statistical inference using 2009 pandemic data from two candidate modelling approaches: (1) a parallel-region (PR) approach, splitting the pandemic into non-interacting epidemics occurring in spatially disjoint regions; and (2) a meta-region (MR) approach, treating the country as a single meta-population with long-range contact rates informed by census data on commuting. Model discrimination is performed through posterior mean deviance statistics alongside more practical considerations. In a real-time context, the use of sequential Monte Carlo (SMC) algorithms to carry out real-time analyses is investigated as an alternative to MCMC using simulated data designed to sternly test both algorithms. SMC-derived analyses are compared with 'gold-standard' MCMC-derived inferences in terms of estimation quality and computational burden. The PR approach provides a better and more timely fit to the epidemic data. Estimates of pandemic quantities of interest are consistent across approaches and, in the PR approach, across regions (e.g. R0 is consistently estimated to be 1.76-1.80, dropping by 43-50% during an over-summer school holiday). An SMC approach was developed, which required some tailoring to tackle a sudden 'shock' in the data

  9. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
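The odds ratios reported above (e.g. OR ≈ 3.0 for proximity to water) come from logistic regression; for a single binary factor the point estimate reduces to the cross-product ratio of a 2×2 table. A sketch with entirely hypothetical swab counts, not the study's data:

```python
import math

def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Cross-product odds ratio for a 2x2 exposure/outcome table,
    with a Wald 95% confidence interval on the log-odds scale."""
    or_hat = (exposed_pos * unexposed_neg) / (exposed_neg * unexposed_pos)
    se = math.sqrt(1 / exposed_pos + 1 / exposed_neg
                   + 1 / unexposed_pos + 1 / unexposed_neg)
    lo = math.exp(math.log(or_hat) - 1.96 * se)
    hi = math.exp(math.log(or_hat) + 1.96 * se)
    return or_hat, (lo, hi)

# Hypothetical counts: L. monocytogenes-positive/negative swabs collected
# near vs. far from surface water (chosen so the OR comes out at 3.0)
or_hat, ci = odds_ratio(exposed_pos=30, exposed_neg=170,
                        unexposed_pos=20, unexposed_neg=340)
```

The full analysis in the record additionally adjusts for other covariates and farm-level clustering via generalized linear mixed models, which this two-by-two sketch does not capture.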

  10. Modeling annual Coffee production in Ghana using ARIMA time series Model

    Directory of Open Access Journals (Sweden)

    E. Harris

    2013-07-01

    Full Text Available In the international commodity trade, coffee, which represents the world’s most valuable tropical agricultural commodity, is second only to oil. Indeed, it is estimated that about 40 million people in the major producing countries in Africa derive their livelihood from coffee, with Africa accounting for about 12 per cent of global production. The paper applied an Autoregressive Integrated Moving Average (ARIMA) time series model to study the behavior of Ghana’s annual coffee production as well as make five-year forecasts. Annual coffee production data from 1990 to 2010 were obtained from the Ghana Cocoa Board and analyzed using ARIMA. The results showed that, in general, the trend of Ghana’s total coffee production fluctuates upward and downward. The best model, arrived at on the basis of various diagnostic, selection and evaluation criteria, was ARIMA (0,3,1). Finally, the forecast figures based on the Box-Jenkins method showed that Ghana’s annual coffee production will decrease continuously over the next five (5) years, all things being equal.
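The I(3) component of the selected ARIMA(0,3,1) model means the series is differenced three times before the moving-average term is fitted, and forecasts made on the differenced scale are then re-integrated back to production levels. A minimal pure-Python sketch of that differencing/re-integration step, on made-up figures rather than the Ghana Cocoa Board data (a real analysis would use a Box-Jenkins package):

```python
def difference(series, d):
    """Apply d rounds of first differencing."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def reintegrate(diffed_forecasts, history, d):
    """Invert d rounds of differencing, using the trailing observation at
    each differencing level to rebuild forecasts on the original scale."""
    levels = [history]
    for _ in range(d):
        levels.append(difference(levels[-1], 1))
    tails = [lvl[-1] for lvl in levels[:-1]]  # last value at levels 0..d-1
    out = []
    for f in diffed_forecasts:
        for i in range(d - 1, -1, -1):
            f = f + tails[i]
            tails[i] = f
        out.append(f)
    return out

# Hypothetical annual production figures (not the Ghana data)
y = [12.0, 13.5, 12.8, 14.1, 15.0, 14.2, 13.9, 15.5]
d3 = difference(y, 3)
# Naive ARIMA(0,3,1)-style multi-step forecast: beyond one step ahead the
# mean forecast of a zero-mean MA(1) process on the differenced scale is 0
flat = reintegrate([0.0] * 5, y, 3)
```

Re-integrating the triple-differenced series against the first three observations reproduces the original series exactly, which is a handy sanity check on any hand-rolled implementation.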

  11. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  12. A Stochastic Continuous Time Model for Microgrid Energy Management

    OpenAIRE

    Heymann, Benjamin; Frédéric Bonnans, J; Silva, Francisco; Jimenez, Guillermo

    2016-01-01

    International audience; We propose a novel stochastic control formulation for the microgrid energy management problem and extend previous works on continuous time rolling horizon strategy to uncertain demand. We model the demand dynamics with a stochastic differential equation. We decompose this dynamics into three terms: an average drift, a time-dependent mean-reversion term and a Brownian noise. We use BOCOPHJB for the numerical simulations. This optimal control toolbox implements a semi...

  13. Detailed models for timing and efficiency in resistive plate chambers

    CERN Document Server

    AUTHOR|(CDS)2067623; Lippmann, Christian

    2003-01-01

    We discuss detailed models for detector physics processes in Resistive Plate Chambers, in particular including the effect of attachment on the avalanche statistics. In addition, we present analytic formulas for average charges and intrinsic RPC time resolution. Using a Monte Carlo simulation including all the steps from primary ionization to the front-end electronics we discuss the dependence of efficiency and time resolution on parameters like primary ionization, avalanche statistics and threshold.

  14. A Staffing Profile of United States Online Database Producers: A Model and Discussion of Educational Implications.

    Science.gov (United States)

    Lowry, Glenn R.

    1983-01-01

    Reports results of survey of United States online database producers designed to determine number of personnel employed in intellectual production of databases and provide basis for generalizations about the numbers of people employed in frequently recurring staff categories. Implications for education based on needs suggested by staffing patterns are examined.…

  15. Modelling of stress development and fault slip in and around a producing gas reservoir

    NARCIS (Netherlands)

    Mulders, F.M.M.

    2003-01-01

    Many gas fields are currently being produced in the northern Netherlands. Induced seismicity related to gas production has become a growing problem in the Netherlands in the past two decades. To date, a few hundred induced seismic events occurred. Induced seismicity is generally assumed to be the

  16. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    Directory of Open Access Journals (Sweden)

    Ladeesh V. G.

    2017-01-01

    Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique, which combines the grinding action of an abrasive tool and thermal effects of electrochemical discharges to remove material from the workpiece for producing complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and attempts to develop a mathematical model for surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency and tool feed rate are identified as the significant factors for controlling surface roughness of the channels produced by G-ECDM. A mathematical model was developed for surface roughness by considering the grinding action and thermal effects of electrochemical discharges in material removal. Experiments are conducted to validate the model and the results obtained are in good agreement with that predicted by the model.

  17. TIMING SIGNATURES OF THE INTERNAL-SHOCK MODEL FOR BLAZARS

    International Nuclear Information System (INIS)

    Boettcher, M.; Dermer, C. D.

    2010-01-01

    We investigate the spectral and timing signatures of the internal-shock model for blazars. For this purpose, we develop a semi-analytical model for the time-dependent radiative output from internal shocks arising from colliding relativistic shells in a blazar jet. The emission through synchrotron and synchrotron-self Compton radiation as well as Comptonization of an isotropic external radiation field are taken into account. We evaluate the discrete correlation function (DCF) of the model light curves in order to evaluate features of photon-energy-dependent time lags and the quality of the correlation, represented by the peak value of the DCF. The almost completely analytic nature of our approach allows us to study in detail the influence of various model parameters on the resulting spectral and timing features. This paper focuses on a range of parameters in which the γ-ray production is dominated by Comptonization of external radiation, most likely appropriate for γ-ray bright flat-spectrum radio quasars (FSRQs) or low-frequency peaked BL Lac objects (LBLs). In most cases relevant for FSRQs and LBLs, the variability of the optical emission is highly correlated with the X-ray and high-energy (HE: > 100 MeV) γ-ray emission. Our baseline model predicts a lead of the optical variability with respect to the higher-energy bands by 1-2 hr and of the HE γ-rays before the X-rays by about 1 hr. We show that variations of certain parameters may lead to changing signs of inter-band time lags, potentially explaining the lack of persistent trends of time lags in most blazars.
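The time-lag analysis in this record rests on the discrete correlation function of pairs of model light curves. As a miniature stand-in only — the DCF used for real blazar data handles uneven sampling, which this sketch does not — a normalized cross-correlation over integer lags on toy, evenly sampled signals:

```python
import math
import statistics

def dcf(a, b, max_lag):
    """Simplified discrete correlation function for two evenly sampled
    light curves: normalized cross-correlation at integer lags. A peak at
    a positive lag means `a` leads `b` by that many samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    sa, sb = statistics.pstdev(a), statistics.pstdev(b)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        out[lag] = (sum((x - ma) * (y - mb) for x, y in pairs)
                    / (len(pairs) * sa * sb))
    return out

# Toy light curves in which the "optical" band leads the "X-ray" band
# by exactly 2 samples (an invented lag, not the paper's 1-2 hr result)
optical = [math.sin(0.3 * t) for t in range(200)]
xray = [math.sin(0.3 * (t - 2)) for t in range(200)]
corr = dcf(optical, xray, max_lag=10)
lead = max(corr, key=corr.get)  # lag of the DCF peak
```

The lag of the DCF peak recovers the injected inter-band lead, and the peak height plays the role of the "quality of the correlation" the record describes.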

  18. Interpretation of cloud-climate feedback as produced by 14 atmospheric general circulation models

    Science.gov (United States)

    Cess, R. D.; Potter, G. L.; Ghan, S. J.; Blanchet, J. P.; Boer, G. J.

    1989-01-01

    Understanding the cause of differences among general circulation model projections of carbon dioxide-induced climatic change is a necessary step toward improving the models. An intercomparison of 14 atmospheric general circulation models, for which sea surface temperature perturbations were used as a surrogate climate change, showed that there was a roughly threefold variation in global climate sensitivity. Most of this variation is attributable to differences in the models' depictions of cloud-climate feedback, a result that emphasizes the need for improvements in the treatment of clouds in these models if they are ultimately to be used as climatic predictors.

  19. Simulations of radiocarbon in a coarse-resolution world ocean model 2. Distributions of bomb-produced Carbon 14

    International Nuclear Information System (INIS)

    Toggweiler, J.R.; Dixon, K.; Bryan, K.

    1989-01-01

    Part 1 of this study examined the ability of the Geophysical Fluid Dynamics Laboratory (GFDL) primitive equation ocean general circulation model to simulate the steady state distribution of naturally produced 14C in the ocean prior to the nuclear bomb tests of the 1950's and early 1960's. In part 2 we begin with the steady state distributions of part 1 and subject the model to the pulse of elevated atmospheric 14C concentrations observed since the 1950's.

  20. Single-Index Additive Vector Autoregressive Time Series Models

    KAUST Repository

    LI, YEHUA

    2009-09-01

    We study a new class of nonlinear autoregressive models for vector time series, where the current vector depends on single-indexes defined on the past lags and the effects of different lags have an additive form. A sufficient condition is provided for stationarity of such models. We also study estimation of the proposed model using P-splines, hypothesis testing, asymptotics, selection of the order of the autoregression and of the smoothing parameters and nonlinear forecasting. We perform simulation experiments to evaluate our model in various settings. We illustrate our methodology on a climate data set and show that our model provides more accurate yearly forecasts of the El Niño phenomenon, the unusual warming of water in the Pacific Ocean. © 2009 Board of the Foundation of the Scandinavian Journal of Statistics.

  1. Mathematical modeling of growth of non-O157 Shiga Toxin-producing Escherichia coli in raw ground beef

    Science.gov (United States)

    The objective of this study was to investigate the growth of Shiga toxin-producing Escherichia coli (STEC, including serogroups O45, O103, O111, O121, and O145) in raw ground beef and to develop mathematical models to describe the bacterial growth under different temperature conditions. Three prima...

  2. Statistical modelling of space-time processes with application to wind power

    DEFF Research Database (Denmark)

    Lenzi, Amanda

    . This thesis aims at contributing to the wind power literature by building and evaluating new statistical techniques for producing forecasts at multiple locations and lead times using spatio-temporal information. By exploring the features of a rich portfolio of wind farms in western Denmark, we investigate...... correlation is captured by a latent Gaussian field. We explore how such models can be handled with stochastic partial differential approximations of Matérn Gaussian fields together with integrated nested Laplace approximations. We show that complex hierarchical spatial models are well suited for wind power....... The use of the integrated nested Laplace approximations is motivated by the desire to produce forecasts on large data sets with hundreds of locations, which is critical during periods of high wind penetration. Subsequently, the extension from spatial to spatio-temporal models is given. Three different...

  3. Time-of-flight estimation based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.

    We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a

  4. Nonlinear time-domain modeling of balanced-armature receivers

    DEFF Research Database (Denmark)

    Jensen, Joe; Agerkvist, Finn T.; Harte, James

    2011-01-01

    of the loudspeaker diaphragm inevitably changes the magnetic and electrical characteristics of the loudspeaker. A numerical time-domain model capable of describing these nonlinearities is presented. By simulation it is demonstrated how the output distortion could potentially be reduced significantly through careful...

  5. On the small-time behavior of stochastic logistic models

    Directory of Open Access Journals (Sweden)

    Dung Tien Nguyen

    2017-09-01

    Full Text Available In this paper we investigate the small-time behaviors of the solution to a stochastic logistic model. The obtained results allow us to estimate the number of individuals in the population and can be used to study stochastic prey-predator systems.

  6. Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach

    Science.gov (United States)

    Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro

    2005-01-01

    Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…

  7. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with nonlinear time series model using Monte-Carlo Bootstrap method. This new method gives better result in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and Monte-Carlo method of forecasting using a ...

  8. A Novel Study: A Situation Model Analysis of Reading Times

    Science.gov (United States)

    McNerney, M. Windy; Goodwin, Kerri A.; Radvansky, Gabriel A.

    2011-01-01

    One of the basic findings on situation models and language comprehension is that reading times are affected by the changing event structure in a text. However, many studies have traditionally used multiple, relatively short texts, in which there is little event consistency across the texts. It is unclear to what extent such changes will be…

  9. Modeling information diffusion in time-varying community networks

    Science.gov (United States)

    Cui, Xuelian; Zhao, Narisa

    2017-12-01

    Social networks are rarely static, and they typically have time-varying network topologies. A great number of studies have modeled temporal networks and explored social contagion processes within these models; however, few of these studies have considered community structure variations. In this paper, we present a study of how the time-varying property of a modular structure influences information dissemination. First, we propose a continuous-time Markov model of information diffusion in which two parameters, mobility rate and community attractiveness, are introduced to address the time-varying nature of the community structure. The basic reproduction number is derived, and the accuracy of this model is evaluated by comparing the simulation and theoretical results. Furthermore, numerical results illustrate that generally both the mobility rate and community attractiveness significantly promote the information diffusion process, especially in the initial outbreak stage. Moreover, the strength of this promotion effect is much stronger when the modularity is higher. Counterintuitively, it is found that when all communities have the same attractiveness, social mobility no longer accelerates the diffusion process. In addition, we show that local spreading in the advantaged community is greatly enhanced by the agglomeration effect caused by social mobility and differences in community attractiveness, which thus increases the global spreading.
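The promotion effect of mobility on spreading can be illustrated with a far cruder stand-in than the paper's continuous-time Markov model: a mean-field SI system on two coupled communities, integrated with Euler steps. All rates, sizes and the coupling form below are invented for illustration:

```python
def simulate(mobility, beta=0.3, steps=2000, dt=0.05):
    """Mean-field SI dynamics in two communities: infection grows
    logistically within each community, and a mobility rate couples the
    two infection levels. Returns the final global infected fraction."""
    i1, i2 = 0.01, 0.0  # seed the outbreak in community 1 only
    for _ in range(steps):
        di1 = beta * i1 * (1 - i1) + mobility * (i2 - i1)
        di2 = beta * i2 * (1 - i2) + mobility * (i1 - i2)
        i1 += dt * di1
        i2 += dt * di2
    return 0.5 * (i1 + i2)

coupled = simulate(mobility=0.05)   # mobility lets community 2 ignite
isolated = simulate(mobility=0.0)   # the outbreak stays in community 1
```

With zero mobility the seeded community saturates while the other never ignites, so the global infected fraction is roughly halved — a toy version of the paper's observation that mobility promotes global diffusion.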

  10. Thermalization time in a model of neutron star

    OpenAIRE

    Ducomet, B.; Nečasová, Š. (Šárka)

    2011-01-01

    We consider an initial boundary value problem for the equation describing heat conduction in a spherical model of neutron star considered by Lattimer et al. We estimate the asymptotic decay of the solution, which provides a plausible estimate for a "thermalization time" for the system.

  11. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  12. Model checking conditional CSL for continuous-time Markov chains

    DEFF Research Database (Denmark)

    Gao, Yang; Xu, Ming; Zhan, Naijun

    2013-01-01

    In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To this end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional p...

  13. Neutrino flavor instabilities in a time-dependent supernova model

    Directory of Open Access Journals (Sweden)

    Sajad Abbar

    2015-12-01

    Full Text Available A dense neutrino medium such as that inside a core-collapse supernova can experience collective flavor conversion or oscillations because of the neutral-current weak interaction among the neutrinos. This phenomenon has been studied in a restricted, stationary supernova model which possesses the (spatial) spherical symmetry about the center of the supernova and the (directional) axial symmetry around the radial direction. Recently it has been shown that these spatial and directional symmetries can be broken spontaneously by collective neutrino oscillations. In this letter we analyze the neutrino flavor instabilities in a time-dependent supernova model. Our results show that collective neutrino oscillations start at approximately the same radius in both the stationary and time-dependent supernova models unless there exist very rapid variations in local physical conditions on timescales of a few microseconds or shorter. Our results also suggest that collective neutrino oscillations can vary rapidly with time in the regimes where they do occur, a behaviour that needs to be studied in time-dependent supernova models.

  14. Rapid Analysis Model: Reducing Analysis Time without Sacrificing Quality.

    Science.gov (United States)

    Lee, William W.; Owens, Diana

    2001-01-01

    Discusses the performance technology design process and the fact that the analysis phase is often being eliminated to speed up the process. Proposes a rapid analysis model that reduces time needed for analysis and still ensures more successful value-added solutions that focus on customer satisfaction. (LRW)

  15. A Case Study Application Of Time Study Model In Paint ...

    African Journals Online (AJOL)

    This paper presents a case study in the development and application of a time study model in a paint manufacturing company. The organization specializes in the production of different grades of paint and paint containers. The paint production activities include; weighing of raw materials, drying of raw materials, dissolving ...

  16. Process model for building quality software on internet time ...

    African Journals Online (AJOL)

    The competitive nature of the software construction market and the inherently exhilarating nature of software itself have hinged the success of any software development project on four major pillars: time to market, product quality, innovation and documentation. Unfortunately, however, existing software development models ...

  17. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...

  18. Perspectives on Global Energy Futures Simulation with the TIME model

    NARCIS (Netherlands)

    de Vries, H.J.M.; Janssen, M.A.; Beusen, A.

    1999-01-01

    Many uncertainties and controversies surround the future of the global energy system. The Targets IMage Energy (TIME) model, of which a concise description is given, is used to explore the consequences of divergent assumptions about some uncertain and controversial issues. The IPCC-IS92a Conventional

  19. Modeling the Earth's Gravity Field in Space and Time

    Science.gov (United States)

    WANG, S.; Panet, I.; Ramillien, G.; Guilloux, F.

    2014-12-01

    The Earth constantly deforms as it undergoes dynamic phenomena, such as earthquakes, post-glacial rebound and water displacement in its fluid envelopes. These processes have different spatial and temporal scales and are accompanied by mass displacements, which create temporal variations of the gravity field. Since 2002, satellite missions such as GOCE and GRACE have provided an unprecedented view of the spatial and temporal variations of the Earth's gravity field. Gravity models built from these data are essential to study the Earth's dynamic processes. The gravity field and its time variations are usually modelled using spherical harmonic functions averaged over a fixed period, such as 10 days or 1 month. This approach is well suited for modeling global phenomena. To better estimate gravity variations related to local and/or transient processes, such as earthquakes or floods, and to take into account the trade-off between temporal and spatial resolution resulting from the satellites' sampling, we propose to model the gravity field as a four-dimensional quantity using functions localized in space and time. For that, we first design a four-dimensional multi-scale basis, well localized both in space and time, by combining spatial Poisson wavelets with an orthogonal temporal wavelet basis. In such an approach, the temporal resolution can be adjusted to the spatial one. Then, we set up the inverse problem to model potential differences between the twin GRACE satellites in 4D, and propose a regularization using prior knowledge of the water cycle signal amplitude. We validate our 4D modelling method on a synthetic test over Africa, using synthetic data on potential differences along the orbits constructed from a global hydrological model. A perspective of this work is to apply it to real data, in order to better model and understand the non-stationary gravity field variations and associated processes at regional scales.

  20. Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Zheng-Fan Liu

    2014-01-01

    Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay, in which some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results.

  1. Modeling the time-changing dependence in stock markets

    International Nuclear Information System (INIS)

    Frezza, Massimiliano

    2012-01-01

    The time-changing dependence in stock markets is investigated by assuming the multifractional process with random exponent (MPRE) as a model for actual log price dynamics. By modeling its functional parameter S(t, ω) via the square root (S.R.) process, a twofold aim is achieved. On the one hand, the main financial and statistical properties shown by the estimated S(t) are captured by surrogates; on the other hand, this capability proves able to model the time-changing dependence shown by stocks or indexes. In particular, a new dynamical approach to interpreting market mechanisms is given. Empirical evidence is offered by analysing the behaviour of the daily closing prices of a well-known index, the Dow Jones Industrial Average (DJIA), from March 1990 to February 2005.

  2. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  3. Real-time GPS Satellite Clock Error Prediction Based On No-stationary Time Series Model

    Science.gov (United States)

    Wang, Q.; Xu, G.; Wang, F.

    2009-04-01

    Analysis Centers of the IGS provide precise satellite ephemerides for GPS data post-processing. The accuracy of the orbit products is better than 5 cm, and that of the satellite clock errors (SCE) approaches 0.1 ns (igscb.jpl.nasa.gov), which meets the requirements of precise point positioning (PPP). Due to the 13-day latency of the IGS final products, only the broadcast ephemeris and the (predicted) IGS ultra-rapid products are applicable for real-time PPP (RT-PPP). Therefore, developing an approach to estimate highly precise GPS SCE in real time is of particular importance for RT-PPP. Many studies have been carried out on forecasting the corrections using models such as the Linear Model (LM), Quadratic Polynomial Model (QPM), Quadratic Polynomial Model with Cyclic corrected Terms (QPM+CT), Grey Model (GM) and Kalman Filter Model (KFM). However, the precision of these models is generally at the nanosecond level. The purpose of this study is to develop a method by which SCE forecasting for RT-PPP can reach sub-nanosecond precision. Analysis of the last 8 years of IGS SCE data showed that prediction precision depends on the stability of the individual satellite clock. The clocks of the most recent GPS satellites (BLOCK IIR and BLOCK IIR-M) are more stable than those of the former GPS satellites (BLOCK IIA). For a stable satellite clock, the next 6 hours of SCE can easily be predicted with the LM. The residuals of unstable satellite clocks are periodic with noise components. Dominant periods of the residuals are found by using the Fourier Transform and spectrum analysis. For the remaining part of the residuals, an auto-regression model is used to determine their systematic trends. Summarizing this study, a non-stationary time series model can be proposed to predict GPS SCE in real time. This prediction model includes a linear term, cyclic corrected terms and an auto-regression term, which represent the SCE trend, the cyclic parts and the rest of the errors, respectively.
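The proposed decomposition (linear term, cyclic corrected terms, auto-regression term) can be sketched in miniature. The toy below fits only the linear and AR(1) pieces on synthetic clock-error data; the cyclic terms are omitted and all numbers are illustrative, not real SCE values:

```python
def fit_line(t, y):
    """Closed-form ordinary least-squares intercept and slope."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    slope = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
             / sum((ti - tm) ** 2 for ti in t))
    return ym - slope * tm, slope

def fit_ar1(r):
    """Lag-1 autoregression coefficient of the detrended residuals."""
    return sum(u * v for u, v in zip(r, r[1:])) / sum(u * u for u in r[:-1])

# Synthetic clock error: linear drift plus a geometrically decaying residual
t = list(range(30))
y = [0.5 + 0.05 * k + 0.2 * (0.8 ** k) for k in t]

a, b = fit_line(t, y)
resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
phi = fit_ar1(resid)
# One-step-ahead prediction: extrapolated trend plus AR(1) residual
pred = a + b * 30 + phi * resid[-1]
```

In the full method the cyclic terms (at the periods found by Fourier analysis) would be removed before the AR step, so the autoregression only has to capture the remaining errors.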

  4. The Near Real Time Ionospheric Model of Latvia

    Science.gov (United States)

    Kaļinka, M.; Zvirgzds, J.; Dobelis, D.; Lazdāns, E.; Reiniks, M.

    2015-11-01

    A highly accurate ionosphere model is necessary to enable fast and reliable coordinate determination with GNSS in real time. The ionosphere is a partially ionized atmospheric region ranging up to 1,000 km in height, affected by spatial variations, space weather, and seasonal and solar cycle dependence. New approaches and modelling algorithms are sought to provide better solutions in the territory of Latvia. Ionospheric TEC values differ substantially between Western and Eastern Latvia. An up-to-date ionospheric map should therefore be calculated, delivered to surveyors in near real time, and published on the Web. Delivering such a map to rover GNSS devices in the field provides surveyors with the current ionospheric conditions and allows them to choose the best time for surveying and to make geodetic measurements with higher accuracy and reliability.

  5. Time-Dependent Toroidal Compactification Proposals and the Bianchi Type I Model: Classical and Quantum Solutions

    Directory of Open Access Journals (Sweden)

    L. Toledo Sesma

    2016-01-01

    Full Text Available We construct an effective four-dimensional model by compactifying a ten-dimensional theory of gravity coupled with a real scalar dilaton field on a time-dependent torus. This approach is applied to anisotropic cosmological Bianchi type I model for which we study the classical coupling of the anisotropic scale factors with the two real scalar moduli produced by the compactification process. Under this approach, we present an isotropization mechanism for the Bianchi I cosmological model through the analysis of the ratio between the anisotropic parameters and the volume of the Universe which in general keeps constant or runs into zero for late times. We also find that the presence of extra dimensions in this model can accelerate the isotropization process depending on the momenta moduli values. Finally, we present some solutions to the corresponding Wheeler-DeWitt (WDW equation in the context of standard quantum cosmology.
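    For reference, the Bianchi type I line element underlying such an analysis takes the standard anisotropic form (textbook notation, not reproduced from the paper):

```latex
% Bianchi type I metric: three independent scale factors a_i(t)
ds^2 = -dt^2 + a_1^2(t)\,dx^2 + a_2^2(t)\,dy^2 + a_3^2(t)\,dz^2
% Isotropization is diagnosed by the ratios of anisotropy parameters to
% the comoving volume V = a_1 a_2 a_3, which remain constant or tend to
% zero at late times.
```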

  6. A Circuit Model of Real Time Human Body Hydration.

    Science.gov (United States)

    Asogwa, Clement Ogugua; Teshome, Assefa K; Collins, Stephen F; Lai, Daniel T H

    2016-06-01

    Changes in human body hydration leading to excess fluid losses or overload affect the body fluid's ability to provide the necessary support for healthy living. We propose a time-dependent circuit model of real-time human body hydration, which models human body tissue as a signal transmission medium. The circuit model predicts the attenuation of a propagating electrical signal. Hydration rates are modeled by a time constant τ, which characterizes the individual-specific metabolic function of the body part measured. We define a surrogate human body anthropometric parameter θ by the muscle-fat ratio and, comparing it with the body mass index (BMI), we find theoretically that the rate of hydration varies from 1.73 dB/min, for high θ and low τ, to 0.05 dB/min for low θ and high τ. We compare these theoretical values with empirical measurements and show that real-time changes in human body hydration can be observed by measuring signal attenuation. We took empirical measurements using a vector network analyzer and obtained different hydration rates for various BMI values, ranging from 0.6 dB/min at a BMI of 22.7 down to 0.04 dB/min at 41.2. We conclude that the galvanic coupling circuit model can predict changes in the volume of the body fluid, which is essential in diagnosing and monitoring treatment of body fluid disorders. Individuals with high BMI would have a higher time-dependent biological characteristic, lower metabolic rate, and lower rate of hydration.
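    As a rough illustration of how a single time constant governs the dynamics, a first-order response can be written as below. This is a generic RC-style sketch with made-up numbers, not the paper's galvanic coupling circuit equations.

```python
import numpy as np

def attenuation_db(t_min, a0_db, a_inf_db, tau_min):
    """First-order response: attenuation relaxes from its initial value
    a0_db toward a_inf_db with time constant tau_min (minutes)."""
    return a_inf_db + (a0_db - a_inf_db) * np.exp(-np.asarray(t_min) / tau_min)

def initial_rate_db_per_min(a0_db, a_inf_db, tau_min):
    """Magnitude of d(attenuation)/dt at t = 0: a larger tau (slower
    metabolism) gives a lower hydration rate, as described above."""
    return abs(a_inf_db - a0_db) / tau_min
```

    With a 6 dB swing, τ = 10 min gives 0.6 dB/min while τ = 120 min gives 0.05 dB/min, spanning the range of rates reported in the abstract.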

  7. Modelling blazar flaring using a time-dependent fluid jet emission model - an explanation for orphan flares and radio lags

    Science.gov (United States)

    Potter, William J.

    2018-01-01

    Blazar jets are renowned for their rapid violent variability and multiwavelength flares; however, the physical processes responsible for these flares are not well understood. In this paper, we develop a time-dependent inhomogeneous fluid jet emission model for blazars. We model optically thick radio flares for the first time and show that they are delayed with respect to the prompt optically thin emission by ∼months to decades, with a lag that increases with the jet power and observed wavelength. This lag is caused by a combination of the travel time of the flaring plasma to the optically thin radio emitting sections of the jet and the slow rise time of the radio flare. We predict two types of flares: symmetric flares - with the same rise and decay time - which occur for flares whose duration is shorter than both the radiative lifetime and the geometric path-length delay time-scale; and extended flares - whose luminosity tracks the power of particle acceleration in the flare - which occur for flares with a duration longer than both the radiative lifetime and geometric delay. Our model naturally produces orphan X-ray and γ-ray flares. These are caused by flares that are only observable above the quiescent jet emission in a narrow band of frequencies. Our model successfully fits the observed multiwavelength flaring spectra and light curves of PKS1502+106 across all wavelengths, using a transient flaring front located within the broad-line region.

  8. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity from non-infectious diseases. The application of regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues that often arise in such analyses: changes in the immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated by "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored using distributed lag non-linear models. For overdispersion, alternative distributions such as the quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate the dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
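    A minimal sketch of the lagged-log-counts idea: the fit below is a from-scratch Poisson IRLS stand-in for the quasi-Poisson GLM recommended in the abstract (quasi-Poisson rescales standard errors but leaves point estimates unchanged). All function names and parameter values are illustrative, not from the article.

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Poisson regression with log link via iteratively reweighted
    least squares (Newton-Raphson on the log-likelihood)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ b, -20, 20))  # clip to avoid overflow
        b = b + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return b

def disease_weather_design(cases, weather, lag=1):
    """Intercept + lagged weather + log of lagged case counts, the
    SIR-motivated autocorrelation control described in the text."""
    y = cases[lag:]
    X = np.column_stack([
        np.ones(len(y)),
        weather[:-lag],
        np.log(cases[:-lag] + 1.0),  # log-lagged counts for contagion
    ])
    return X, y
```

    Fitting this to counts simulated from the same structure recovers the weather and contagion coefficients.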

  9. Forecast of useful energy for the TIMES-Norway model

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Eva

    2012-07-25

    A regional forecast of useful energy demand in seven Norwegian regions is calculated based on earlier work on a national forecast. This forecast will be input to the energy system model TIMES-Norway, and analyses will result in forecasts of the use of different energy carriers under varying external conditions (not included in this report). The report describes the methodology used and the resulting forecast of useful energy. It is based on information on the long-term development of the economy from the Ministry of Finance, projections of population growth from Statistics Norway, and several other studies. The definition of a forecast of useful energy demand is not absolute, but depends on the purpose. One has to be careful not to include elements that are part of the energy system model, such as energy efficiency measures. The forecast presented here includes the influence of new building regulations and the EU prohibition of the production of incandescent light bulbs, etc. Other energy efficiency measures such as energy management, heat pumps, tightening of leaks etc. are modelled as technologies to invest in and are included in the TIMES-Norway model. The elasticity between different energy carriers is handled by the TIMES-Norway model, and some elasticity is also included as the possibility to invest in energy efficiency measures. The forecast results in an increase of total useful energy from 2006 to 2050 of 18 %. Growth is expected to be highest in the regions South and East. Industry remains at a constant level in the base case, and increased or reduced energy demand is analysed as different scenarios with the TIMES-Norway model. The most important driver is population growth. Together with the assumptions made, it results in increased useful energy demand in the household and service sectors of 25 % and 57 %, respectively. (au)

  10. Space-time adaptive hierarchical model reduction for parabolic equations.

    Science.gov (United States)

    Perotto, Simona; Zilio, Alessandro

    Surrogate solutions and surrogate models for complex problems in many fields of science and engineering represent an important recent research line towards the construction of the best trade-off between modeling reliability and computational efficiency. Among surrogate models, hierarchical model (HiMod) reduction provides an effective approach for phenomena characterized by a dominant direction in their dynamics. The HiMod approach obtains 1D models naturally enhanced by the inclusion of the effect of the transverse dynamics. HiMod reduction couples a finite element approximation along the mainstream with a locally tunable modal representation of the transverse dynamics. In particular, we focus on the pointwise HiMod reduction strategy, where the modal tuning is performed at each finite element node. We formalize the pointwise HiMod approach in an unsteady setting, by resorting to a model discontinuous in time, continuous and hierarchically reduced in space (c[M([Formula: see text])G(s)]-dG(q) approximation). The selection of the modal distribution and of the space-time discretization is automatically performed via an adaptive procedure based on an a posteriori analysis of the global error. The final outcome of this procedure is a table, named the HiMod lookup diagram, that sets the time partition and, for each time interval, the corresponding 1D finite element mesh together with the associated modal distribution. The results of the numerical verification confirm the robustness of the proposed adaptive procedure in terms of accuracy, sensitivity with respect to the goal quantity and the boundary conditions, and computational saving. Finally, the validation results in the groundwater experimental setting are promising. The extension of HiMod reduction to an unsteady framework represents a crucial step with a view to practical engineering applications. Moreover, the results of the validation phase confirm that the HiMod approximation is a viable approach.

  11. Acceptability and feasibility of a low-cost, theory-based and co-produced intervention to reduce workplace sitting time in desk-based university employees

    OpenAIRE

    Mackenzie, Kelly; Goyder, Elizabeth; Eves, Francis

    2015-01-01

    Background Prolonged sedentary time is linked with poor health, independent of physical activity levels. Workplace sitting significantly contributes to sedentary time, but there is limited research evaluating low-cost interventions targeting reductions in workplace sitting. Current evidence supports the use of multi-modal interventions developed using participative approaches. This study aimed to explore the acceptability and feasibility of a low-cost, co-produced, multi-modal intervention to...

  12. Bio-physical modeling of time-resolved forward scattering by Listeria colonies

    Science.gov (United States)

    Bae, Euiwon; Banada, Padmapriya P.; Bhunia, Arun K.; Hirleman, E. Daniel

    2006-10-01

    We have developed a detection system and associated protocol based on optical forward scattering, in which bacterial colonies of various species and strains growing on solid nutrient surfaces produce unique scatter signatures. The aim of the present investigation was to develop a bio-physical model for the relevant phenomena. In particular, we considered time-varying macroscopic morphological properties of the growing colonies and modeled the scattering using scalar diffraction theory. For the present work we performed detailed studies with three species of Listeria: L. innocua, L. monocytogenes, and L. ivanovii. The baseline experiments involved cultures grown on brain heart infusion (BHI) agar, and the scatter images were captured every six hours over an incubation period of 42 hours. The morphologies of the colonies were studied by phase contrast microscopy, including measurement of colony diameter. Growth curves, represented by colony diameter as a function of time, were compared with the time-evolution of the scattering signatures. Similar studies were carried out with L. monocytogenes grown on different substrates. Non-dimensionalizing incubation time in terms of the time to reach stationary phase was effective in reducing the dimensionality of the model. Bio-physical properties of the colony such as diameter, bacterial density variation, surface curvature/profile, and transmission coefficient are important parameters in predicting the features of the forward scattering signatures. These parameters are included in a baseline model that treats the colony as a concentric structure with radial variations in phase modulation. In some cases azimuthal variations and random phase inclusions were included as well. The end result is a protocol (growth media, incubation time and conditions) that produces reproducible and distinguishable scatter patterns for a variety of harmful foodborne pathogens in a short period of time. Further, the bio-physical model we

  13. Unsupervised Classification During Time-Series Model Building.

    Science.gov (United States)

    Gates, Kathleen M; Lane, Stephanie T; Varangis, E; Giovanello, K; Guiskewicz, K

    2017-01-01

    Researchers who collect multivariate time-series data across individuals must decide whether to model the dynamic processes at the individual level or at the group level. A recent innovation, group iterative multiple model estimation (GIMME), offers one solution to this dichotomy by identifying group-level time-series models in a data-driven manner while also reliably recovering individual-level patterns of dynamic effects. GIMME is unique in that it does not assume homogeneity in processes across individuals in terms of the patterns or weights of temporal effects. However, it can be difficult to make inferences from the nuances in varied individual-level patterns. The present article introduces an algorithm that arrives at subgroups of individuals that have similar dynamic models. Importantly, the researcher does not need to decide the number of subgroups. The final models contain reliable group-, subgroup-, and individual-level patterns that enable generalizable inferences, subgroups of individuals with shared model features, and individual-level patterns and estimates. We show that integrating community detection into the GIMME algorithm improves upon current standards in two important ways: (1) providing reliable classification and (2) increasing the reliability in the recovery of individual-level effects. We demonstrate this method on functional MRI from a sample of former American football players.

  14. The Time Is Right to Focus on Model Organism Metabolomes.

    Science.gov (United States)

    Edison, Arthur S; Hall, Robert D; Junot, Christophe; Karp, Peter D; Kurland, Irwin J; Mistrik, Robert; Reed, Laura K; Saito, Kazuki; Salek, Reza M; Steinbeck, Christoph; Sumner, Lloyd W; Viant, Mark R

    2016-02-15

    Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research.

  15. The Time Is Right to Focus on Model Organism Metabolomes

    Directory of Open Access Journals (Sweden)

    Arthur S. Edison

    2016-02-01

    Full Text Available Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research.

  16. Using 3D Printing (Additive Manufacturing) to Produce Low-Cost Simulation Models for Medical Training.

    Science.gov (United States)

    Lichtenberger, John P; Tatum, Peter S; Gada, Satyen; Wyn, Mark; Ho, Vincent B; Liacouras, Peter

    2018-03-01

    This work describes customized, task-specific simulation models derived from 3D printing in clinical settings and medical professional training programs. Simulation models/task trainers have an array of purposes and desired achievements for the trainee, defining that these are the first step in the production process. After this purpose is defined, computer-aided design and 3D printing (additive manufacturing) are used to create a customized anatomical model. Simulation models then undergo initial in-house testing by medical specialists followed by a larger scale beta testing. Feedback is acquired, via surveys, to validate effectiveness and to guide or determine if any future modifications and/or improvements are necessary. Numerous custom simulation models have been successfully completed with resulting task trainers designed for procedures, including removal of ocular foreign bodies, ultrasound-guided joint injections, nerve block injections, and various suturing and reconstruction procedures. These task trainers have been frequently utilized in the delivery of simulation-based training with increasing demand. 3D printing has been integral to the production of limited-quantity, low-cost simulation models across a variety of medical specialties. In general, production cost is a small fraction of a commercial, generic simulation model, if available. These simulation and training models are customized to the educational need and serve an integral role in the education of our military health professionals.

  17. Modelling travel and residence times in the eastern Irish Sea

    International Nuclear Information System (INIS)

    Dabrowski, T.; Hartnett, M.

    2008-01-01

    The Irish Sea, which lies between 51°N-56°N and 2°50'W-7°W, provides a sheltered environment for exploiting valuable fisheries resources. Anthropogenic activity is a real threat to its water quality. The majority of riverine freshwater input flows into the eastern Irish Sea. The structure of the water circulation was not well understood during the planning of the Sellafield nuclear plant outfall site in the eastern Irish Sea. A three-dimensional primitive equation numerical model was applied to the Irish Sea to simulate both barotropic and baroclinic circulation within the region. High accuracy was achieved in the prediction of both tidal circulation and surface and near-bed water temperatures across the region. The model properly represented the Western Irish Sea Gyre, induced by thermal stratification and not known when Sellafield was planned. Passive tracer simulations based on the developed hydrodynamic model were used to derive residence times of the eastern Irish Sea region for various times of the year, as well as travel times from the Sellafield outfall site to various locations within the Irish Sea. The results indicate a strong seasonal variability of travel times from Sellafield to the examined locations. The travel time to the Clyde Sea is shortest for an autumnal tracer release (90 days); it takes almost a year for the tracer to arrive at the same location if it is released in January. Travel times from Sellafield to Dublin Bay fall within the range of 180-360 days. The average residence time of the entire eastern Irish Sea is around 7 months. The areas surrounding the Isle of Man are flushed first due to a predominant northward flow; a backwater is formed in Liverpool Bay. Thus, elevated tracer concentrations are predicted in Liverpool Bay in the case of accidental spills at the Sellafield outfall site.

  18. Interaction among competitive producers in the electricity market: An iterative market model for the strategic management of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Carraretto, Cristian; Zigante, Andrea [University of Padova (Italy). Department of Mechanical Engineering

    2006-12-15

    The liberalization of the electricity sector requires utilities to develop sound operation strategies for their power plants. In this paper, attention is focused on the problem of optimizing the management of the thermal power plants belonging to a strategic producer that competes with other strategic companies and a set of smaller non-strategic ones in the day-ahead market. The market model suggested here determines an equilibrium condition over the selected period of analysis, in which no producer can increase profits by changing its supply offers given all rivals' bids. Power plant technical and operating constraints are considered. An iterative procedure, based on dynamic programming, is used to find the optimum production plans of each producer. Some combinations of power plants and numbers of producers are analyzed, to simulate, for instance, the decommissioning of old expensive power plants, the installation of new more efficient capacity, the severance of large dominant producers into smaller utilities, and the access of new producers to the market. Their effect on power plant management, market equilibrium, traded electricity quantities and prices is discussed. (author)

  19. Interaction among competitive producers in the electricity market: An iterative market model for the strategic management of thermal power plants

    International Nuclear Information System (INIS)

    Carraretto, Cristian; Zigante, Andrea

    2006-01-01

    The liberalization of the electricity sector requires utilities to develop sound operation strategies for their power plants. In this paper, attention is focused on the problem of optimizing the management of the thermal power plants belonging to a strategic producer that competes with other strategic companies and a set of smaller non-strategic ones in the day-ahead market. The market model suggested here determines an equilibrium condition over the selected period of analysis, in which no producer can increase profits by changing its supply offers given all rivals' bids. Power plant technical and operating constraints are considered. An iterative procedure, based on dynamic programming, is used to find the optimum production plans of each producer. Some combinations of power plants and numbers of producers are analyzed, to simulate, for instance, the decommissioning of old expensive power plants, the installation of new more efficient capacity, the severance of large dominant producers into smaller utilities, and the access of new producers to the market. Their effect on power plant management, market equilibrium, traded electricity quantities and prices is discussed. (author)

  20. Exactly solvable string models of curved space-time backgrounds

    CERN Document Server

    Russo, J.G.; Tseytlin, A.A.

    1995-01-01

    We consider a new 3-parameter class of exact 4-dimensional solutions in closed string theory and solve the corresponding string model, determining the physical spectrum and the partition function. The background fields (4-metric, antisymmetric tensor, two Kaluza-Klein vector fields, dilaton and modulus) generically describe axially symmetric stationary rotating (electro)magnetic flux-tube type universes. Backgrounds of this class include both the dilatonic Melvin solution and the uniform magnetic field solution discussed earlier, as well as some singular space-times. Solvability of the string sigma model is related to its connection via duality to a much simpler looking model, which is a "twisted" product of a flat 2-space and a space dual to the 2-plane. We discuss some physical properties of this model as well as a number of generalizations leading to larger classes of exact 4-dimensional string solutions.

  1. Thermodynamic modelling of an onsite methanation reactor for upgrading producer gas from commercial small scale biomass gasifiers.

    Science.gov (United States)

    Vakalis, S; Malamis, D; Moustakas, K

    2017-06-26

    Small scale biomass gasifiers have the advantage of higher electrical efficiency in comparison to other conventional small scale energy systems. Nonetheless, a major drawback of small scale biomass gasifiers is the relatively poor quality of the producer gas. In addition, several EU Member States are seeking ways to store the excess energy that is produced from renewables like wind power and hydropower. A recent development is the storage of energy by electrolysis of water and the production of hydrogen, in a process commonly known as "power-to-gas". The present manuscript proposes an onsite secondary reactor for upgrading producer gas by mixing it with hydrogen in order to initiate methanation reactions. A thermodynamic model has been developed for assessing the potential of the proposed methanation process. The model used input parameters from a representative small scale biomass gasifier and hydrogen molar ratios from 1:0 to 1:4.1. The Villar-Cruise-Smith algorithm was used for minimizing the Gibbs free energy. The model returned the molar fractions of the permanent gases, the heating values and the Wobbe index. For mixtures of hydrogen and producer gas at a 1:0.9 ratio, the increase of the heating value is maximized, at 78%. For ratios higher than 1:3, the Wobbe index increases significantly and surpasses 30 MJ/Nm3. Copyright © 2017 Elsevier Ltd. All rights reserved.
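    The Wobbe index combines the volumetric heating value of a gas with its density relative to air. The sketch below uses rounded textbook heating values and molar masses (not figures from the paper) to show how the index is computed for a hydrogen-enriched producer gas mixture.

```python
import math

# Approximate volumetric higher heating values (MJ/Nm3) and molar
# masses (g/mol): rounded textbook figures, not values from the paper.
HHV = {"H2": 12.7, "CO": 12.6, "CH4": 39.8, "CO2": 0.0, "N2": 0.0}
M = {"H2": 2.016, "CO": 28.01, "CH4": 16.04, "CO2": 44.01, "N2": 28.01}
M_AIR = 28.96  # mean molar mass of air (g/mol)

def wobbe_index(fractions):
    """Wobbe index = HHV / sqrt(relative density to air) for a mixture
    given as a dict of molar fractions (should sum to 1)."""
    hhv = sum(fractions[g] * HHV[g] for g in fractions)
    rel_density = sum(fractions[g] * M[g] for g in fractions) / M_AIR
    return hhv / math.sqrt(rel_density)
```

    Pure methane gives roughly 53 MJ/Nm3, while a lean hydrogen/producer-gas blend falls well below the 30 MJ/Nm3 threshold mentioned above.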

  2. A new dry eye mouse model produced by exorbital and intraorbital lacrimal gland excision.

    Science.gov (United States)

    Shinomiya, Katsuhiko; Ueta, Mayumi; Kinoshita, Shigeru

    2018-01-24

    Chronic dry eye is an increasingly prevalent condition worldwide, with resulting loss of visual function and quality of life. Relevant, repeatable, and stable animal models of dry eye are still needed. We have developed an improved surgical mouse model of dry eye based on severe aqueous fluid deficiency, created by excising both the exorbital and intraorbital lacrimal glands (ELG and ILG, respectively) of mice. After ELG plus ILG excision, dry eye symptoms were evaluated by observation of fluorescein infiltration, measurement of tear production, and histological evaluation of the ocular surface. Tear production in the model mice was significantly decreased compared with the controls. The corneal fluorescein infiltration score of the model mice was also significantly increased compared with the controls. Histological examination revealed significant severe inflammatory changes in the cornea, conjunctiva and meibomian glands of the model mice after surgery. In observations of LysM-eGFP (+/-) mouse tissues, postsurgical infiltration of green fluorescent neutrophils was observed in the ocular surface tissues. We theorize that the inflammatory changes on the ocular surface of this model were induced secondarily by persistent severe tear reduction. The mouse model will be useful for investigations of both the pathophysiology of and new therapies for tear-volume-reduction-type dry eye.

  3. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of our proposed approach are more accurate in predicting software cumulative failure time than existing approaches.
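    The architecture search described above can be sketched as a toy genetic algorithm over the integer pair (number of delayed inputs, number of hidden neurons). In the paper the fitness would be the validation error of a Levenberg-Marquardt-trained network; here it is any callable to minimize, and all names and ranges are illustrative.

```python
import random

def ga_architecture_search(fitness, lag_range=(1, 10), hidden_range=(1, 20),
                           pop=12, gens=60, seed=0):
    """Minimize `fitness` over (n_delayed_inputs, n_hidden) with
    elitist selection and +/-1 integer mutations."""
    rng = random.Random(seed)

    def rand_ind():
        return (rng.randint(*lag_range), rng.randint(*hidden_range))

    def mutate(ind):
        lag = min(max(ind[0] + rng.choice([-1, 0, 1]), lag_range[0]), lag_range[1])
        hid = min(max(ind[1] + rng.choice([-1, 0, 1]), hidden_range[0]), hidden_range[1])
        return (lag, hid)

    popn = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)          # best individuals first
        elite = popn[: pop // 2]        # keep the top half
        popn = elite + [mutate(rng.choice(elite)) for _ in range(pop - len(elite))]
    return min(popn, key=fitness)
```

    On a smooth toy fitness with a single optimum, the search settles near that optimum within a few dozen generations.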

  4. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    Science.gov (United States)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant with a stochastic variable that is log-normally distributed. The incubation time distribution is then also shown to be log-normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
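    The stochastic step above has a simple numerical illustration: if the rate constant k is log-normal and the incubation time scales as T = c/k (mean-field growth to a fixed threshold), then T is itself log-normal. The sketch below is a generic demonstration of this property, with hypothetical parameter values, not the authors' simulation.

```python
import numpy as np

def incubation_times(n, mu, sigma, c=1.0, seed=0):
    """Draw the kinetic rate constant k from a log-normal distribution
    and return incubation times T = c / k; since log T = log c - log k,
    T is log-normal with location log(c) - mu and the same sigma."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=mu, sigma=sigma, size=n)
    return c / k
```

    A large sample confirms the expected location and spread of log T.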

  5. Waste collection multi objective model with real time traceability data.

    Science.gov (United States)

    Faccio, Maurizio; Persona, Alessandro; Zanin, Giorgia

    2011-12-01

    Waste collection is a highly visible municipal service that involves large expenditures and difficult operational problems, and it is expensive to operate in terms of investment costs (i.e. the vehicle fleet), operational costs (i.e. fuel, maintenance) and environmental costs (i.e. emissions, noise and traffic congestion). Modern traceability devices, like volumetric sensors, RFID (Radio Frequency Identification) systems, GPRS (General Packet Radio Service) and GPS (Global Positioning System) technology, permit data to be obtained in real time, which is fundamental to implementing an efficient and innovative waste collection routing model. The basic idea is that knowing the real-time data of each vehicle and the real-time fill level of each bin makes it possible to decide, as a function of the waste generation pattern, which bins should be emptied and which should not, optimizing different aspects such as the total distance covered, the necessary number of vehicles and the environmental impact. This paper describes a framework for the traceability technology available for the optimization of solid waste collection, and introduces an innovative vehicle routing model integrated with real-time traceability data, with a first application in an Italian city of about 100,000 inhabitants. The model is tested and validated using simulation, and an economic feasibility study is reported at the end of the paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
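    The core decision described above can be sketched in a toy form: empty only the bins whose sensor-reported fill level exceeds a threshold, then order them with a greedy nearest-neighbour tour from the depot. This is a hypothetical simplification of the paper's multi-objective model, with illustrative names throughout.

```python
import math

def plan_route(depot, bins, threshold=0.75):
    """Select bins (x, y, fill_level) with fill_level >= threshold and
    return them in greedy nearest-neighbour order starting at depot."""
    todo = [(x, y) for x, y, fill in bins if fill >= threshold]
    route, pos = [], depot
    while todo:
        nxt = min(todo, key=lambda b: math.dist(pos, b))  # closest bin
        todo.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route
```

    In a real system the tour step would be replaced by a proper vehicle routing solver; the point here is the real-time filtering of bins before routing.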

  6. Mathematical model of tuberculosis epidemic with recovery time delay

    Science.gov (United States)

    Iskandar, Taufiq; Chaniago, Natasya Ayuningtia; Munzir, Said; Halfiani, Vera; Ramli, Marwan

    2017-12-01

    Tuberculosis (TB) is a contagious disease which can cause death. The disease is caused by Mycobacterium tuberculosis, which generally affects the lungs and other organs such as the lymph glands, intestine, kidneys, uterus, bones, and brain. TB spreads through bacteria-contaminated air which is inhaled into the lungs. Symptoms of TB include cough, chest pain, shortness of breath, appetite loss, weight loss, fever, chills, and fatigue. The World Health Organization (WHO) reported that Indonesia ranks second in the number of TB cases after India, which accounts for 23% of global cases, while China is reported to have 10%. TB has become one of the greatest death threats globally. One way to counter TB is vaccination. However, medication is needed once a person is already infected. The medication generally takes 6 months and consists of two phases, inpatient and outpatient. Mathematical models for analyzing the spread of TB have been widely developed. One of them is the SEIR-type model, in which the population is divided into four groups: susceptible (S), exposed (E), infected (I), and recovered (R). In practice, a TB patient must undergo medication for a period of time in order to recover. This article discusses a model of TB spread that accounts for the recovery period as a time delay. The model is of SIR type, with the population divided into three groups: susceptible (S), infected (I), and recovered (R). Here, the vaccine is given to the susceptible group and the time delay applies to the group undergoing medication.
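    A minimal discrete-time sketch of the delayed-recovery idea: a cohort infected at step t leaves the infected group exactly tau steps later (the treatment length), while vaccination moves susceptibles directly to the recovered group. Parameter values are illustrative, not the paper's:

```python
import numpy as np

# Discrete-time SIR with vaccination rate v and a fixed recovery delay tau
# (tau ~ length of the medication period, in time steps of dt).
beta, v, tau, dt = 0.3, 0.01, 50, 1.0
T = 400
S = np.zeros(T); I = np.zeros(T); R = np.zeros(T)
new_inf = np.zeros(T)                              # infection history buffer
S[0], I[0] = 0.99, 0.01

for t in range(T - 1):
    inf = beta * S[t] * I[t] * dt                  # new infections at step t
    rec = new_inf[t - tau] if t >= tau else 0.0    # delayed recoveries
    new_inf[t] = inf
    S[t+1] = S[t] - inf - v * S[t] * dt            # vaccination drains S ...
    I[t+1] = I[t] + inf - rec
    R[t+1] = R[t] + rec + v * S[t] * dt            # ... directly into R
```

    The history buffer plays the role of the delay term in the delay differential equation: recoveries at time t depend on infections at time t - tau rather than on the current infected fraction.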

  7. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate-scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. With the release of Unity 4.0, new rendering features have been added, including Direct
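    The continuous level-of-detail behaviour described above amounts to scaling the displacement tessellation with viewing distance: full geometric detail up close, fading to the flat low-poly mesh far away. A conceptual sketch (the function name, distance bounds and subdivision cap are assumptions, not the paper's implementation):

```python
def displacement_detail(distance, d_near=2.0, d_far=30.0, max_subdiv=6):
    """Continuous LOD factor for displacement-map tessellation: returns the
    subdivision level, max_subdiv at d_near or closer, 0 at d_far or beyond,
    interpolated linearly in between (no discrete LOD meshes needed)."""
    t = min(max((d_far - distance) / (d_far - d_near), 0.0), 1.0)
    return max_subdiv * t
```

    In an engine, this factor would drive the tessellation shader that applies the displacement map, so one and the same low-poly asset serves every viewing distance.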

  8. Modeling optical and UV polarization of AGNs. IV. Polarization timing

    Science.gov (United States)

    Rojas Lobos, P. A.; Goosmann, R. W.; Marin, F.; Savić, D.

    2018-03-01

    Context. Optical observations cannot resolve the structure of active galactic nuclei (AGN), and a unified model for AGN was inferred mostly from indirect methods, such as spectroscopy and variability studies. Optical reverberation mapping allowed us to constrain the spatial dimension of the broad emission line region and thereby to measure the mass of supermassive black holes. Recently, reverberation was also applied to the polarized signal emerging from different AGN components. In principle, this should allow us to measure the spatial dimensions of the sub-parsec reprocessing media. Aims: We conduct numerical modeling of polarization reverberation and provide theoretical predictions for the polarization time lag induced by different AGN components. The model parameters are adjusted to the observational appearance of the Seyfert 1 galaxy NGC 4151. Methods: We modeled scattering-induced polarization and tested different geometries for the circumnuclear dust component. Our tests included the effects of clumpiness and different dust prescriptions. To further extend the model, we also explored the effects of additional ionized winds stretched along the polar direction, and of an equatorial scattering ring that is responsible for the polarization angle observed in pole-on AGN. The simulations were run using a time-dependent version of the STOKES code. Results: Our modeling confirms the previously found polarization characteristics as a function of the observer's viewing angle. When the dust adopts a flared-disk geometry, the lags reveal a clear difference between type 1 and type 2 AGN. This distinction is less clear for a torus geometry, where the time lag is more sensitive to the geometry and optical depth of the inner surface layers of the funnel. The presence of a scattering equatorial ring and ionized outflows increased the recorded polarization time lags, and the polar outflows smooth out the dependence on viewing angle, especially for the higher optical depth of the

  9. Price-Maker Wind Power Producer Participating in a Joint Day-Ahead and Real-Time Market

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Papakonstantinou, Athanasios; Ordoudis, Christos

    2015-01-01

    The large-scale integration of stochastic renewable energy introduces significant challenges for power system operators and calls into question the efficiency of the current market design. Recent research embeds the uncertain nature of renewable sources by modelling electricity markets as a two...... Constraints (MPEC) that is reformulated as a single-level Mixed-Integer Linear Program (MILP), which can be readily solved. Our analysis shows that adopting strategic behaviour may improve the producer's expected profit as the share of wind power increases. However, this incentive diminishes in power systems...... where available flexible capacity is high enough to ensure an efficient market operation....

  10. A Black-box Modelling Engine for Discharge Produced Plasma Radiation Sources

    International Nuclear Information System (INIS)

    Zakharov, S.V.; Choi, P.; Krukovskiy, A.Y.; Zhang, Q.; Novikov, V.G.; Zakharov, V.S.

    2006-01-01

    A Black-box Modelling Engine (BME) is an instrument based on the adaptation of the RMHD code Z*, integrated into a specific computation environment to provide a turn-key simulation instrument and to enable routine plasma modelling without specialist knowledge of numerical computation. Two different operating modes are provided: a Detailed Physics mode and a Fast Numerics mode. In the Detailed Physics mode, non-stationary, non-equilibrium radiation physics is included to allow the modelling of transient plasmas in experimental geometry. In the Fast Numerics mode, the system architecture and the radiation transport are simplified to significantly accelerate the computation rate. The Fast Numerics mode allows the BME to be used realistically for parametric scanning to explore a complex physical set-up, before using the Detailed Physics mode. As an example of results from BME modelling, the EUV source plasma dynamics in a pulsed capillary discharge are presented.

  11. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
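    The RA and RAW filters differ only in whether the filter displacement is applied to the middle time level alone or split between the middle and newest levels so that the three-time-level mean is conserved. A minimal sketch on the oscillation test equation du/dt = iωu (parameter values illustrative), following the standard RAW formulation, which reduces to RA at α = 1:

```python
import numpy as np

def leapfrog_filtered(nu, alpha, omega=1.0, dt=0.2, nsteps=500):
    """Leapfrog integration of du/dt = i*omega*u with a time filter.

    alpha = 1.0  -> classic Robert-Asselin (RA) filter
    alpha ~ 0.53 -> Robert-Asselin-Williams (RAW) filter
    RAW splits the filter displacement d between the middle and newest
    time levels so that the three-level mean is conserved.
    """
    u_old = np.exp(0j)                    # u(0), exact
    u_now = np.exp(1j * omega * dt)       # u(dt), exact starting value
    for _ in range(nsteps):
        u_new = u_old + 2 * dt * 1j * omega * u_now   # leapfrog step
        d = u_old - 2 * u_now + u_new                 # filter displacement
        u_filt = u_now + nu * alpha / 2 * d           # filter middle level
        u_new = u_new + nu * (alpha - 1) / 2 * d      # RAW also nudges newest
        u_old, u_now = u_filt, u_new
    return u_now

# the exact solution has |u| = 1 at all times
amp_ra = abs(leapfrog_filtered(nu=0.2, alpha=1.0))
amp_raw = abs(leapfrog_filtered(nu=0.2, alpha=0.53))
```

    With α = 1 the mean state is not conserved and the physical mode is visibly damped over 500 steps; the RAW variant keeps the amplitude close to 1, which is the "single line of code" change referred to above.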

  12. Adsorption of Pb(II) and Cu(II) by Ginkgo-Leaf-Derived Biochar Produced under Various Carbonization Temperatures and Times

    Directory of Open Access Journals (Sweden)

    Myoung-Eun Lee

    2017-12-01

    Full Text Available Ginkgo trees are common street trees in Korea, and the large amounts of leaves that fall onto the streets annually need to be collected and treated. Fallen ginkgo leaves have therefore been used as a raw material to produce biochar for the removal of heavy metals from solution. Ginkgo-leaf-derived biochar was produced under various carbonization temperatures and times. This study evaluated the physicochemical properties and Pb(II) and Cu(II) adsorption characteristics of ginkgo-leaf-derived biochar samples produced under different carbonization conditions. The biochar samples produced at 800 °C for 90 and 120 min contained the highest amounts of oxygen- and nitrogen-substituted carbons, which might contribute to a high metal-adsorption rate. The intensity of the phosphate bond increased with carbonization temperature up to 800 °C and after 90 min of carbonization. The Pb(II) and Cu(II) adsorption capacities were highest when the ginkgo-leaf-derived biochar was produced at 800 °C, with removal rates of 99.2% and 34.2%, respectively. The highest removal rate was achieved when the intensity of the phosphate functional group in the biochar was highest. Therefore, the ginkgo-leaf-derived biochar produced at 800 °C for 90 min can be used as an effective bio-adsorbent for the removal of metals from solution.

  13. Exactly solvable string models of curved space-time backgrounds

    International Nuclear Information System (INIS)

    Russo, J.G.

    1995-01-01

    We consider a new 3-parameter class of exact 4-dimensional solutions in closed string theory and solve the corresponding string model, determining the physical spectrum and the partition function. The background fields (4-metric, antisymmetric tensor, two Kaluza-Klein vector fields, dilaton and modulus) generically describe axially symmetric stationary rotating (electro)magnetic flux-tube type universes. Backgrounds of this class include both the "dilatonic" (a=1) and "Kaluza-Klein" (a=√(3)) Melvin solutions and the uniform magnetic field solution, as well as some singular space-times. Solvability of the string σ-model is related to its connection via duality to a simpler model which is a "twisted" product of a flat 2-space and a space dual to a 2-plane. We discuss some physical properties of this model (tachyonic instabilities in the spectrum, gyromagnetic ratio, issue of singularities, etc.). It provides one of the first examples of a consistent solvable conformal string model with explicit D=4 curved space-time interpretation. (orig.)

  14. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to group representation theory. Equilibria of the nonconserved moments are chosen so as to recover the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and wider applicable range than its SRT version.
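    The collide-in-moment-space step described above can be illustrated with the standard athermal D2Q9 MRT model of Lallemand and Luo; the paper's energy-conserving finite-difference compressible model uses a different moment set, so this is only a sketch of the general mechanism. Conserved moments (density, momentum) are given zero relaxation rates:

```python
import numpy as np

# D2Q9 velocity set and the standard Gram-Schmidt moment matrix (Lallemand & Luo)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
M = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1,  1],   # density rho
    [-4, -1, -1, -1, -1,  2,  2,  2,  2],   # energy e
    [ 4, -2, -2, -2, -2,  1,  1,  1,  1],   # energy square eps
    [ 0,  1,  0, -1,  0,  1, -1, -1,  1],   # momentum jx
    [ 0, -2,  0,  2,  0,  1, -1, -1,  1],   # heat flux qx
    [ 0,  0,  1,  0, -1,  1,  1, -1, -1],   # momentum jy
    [ 0,  0, -2,  0,  2,  1,  1, -1, -1],   # heat flux qy
    [ 0,  1, -1,  1, -1,  0,  0,  0,  0],   # normal stress pxx
    [ 0,  0,  0,  0,  0,  1, -1,  1, -1],   # shear stress pxy
], dtype=float)
Minv = np.linalg.inv(M)

def mrt_collide(f, s):
    """One MRT collision: relax each moment toward equilibrium at its own rate.
    s is the vector of relaxation rates; zeros for conserved moments."""
    m = M @ f
    rho, jx, jy = m[0], m[3], m[5]
    # one common athermal equilibrium-moment choice (rho0 = 1)
    m_eq = np.array([rho,
                     -2*rho + 3*(jx**2 + jy**2),
                     rho - 3*(jx**2 + jy**2),
                     jx, -jx, jy, -jy,
                     jx**2 - jy**2, jx*jy])
    m = m - s * (m - m_eq)
    return Minv @ m
```

    Because the relaxation rates of rho, jx and jy are zero, density and momentum pass through the collision unchanged, while the remaining moments can each relax at a rate tuned for stability and accuracy, which is the advantage over the single-relaxation-time version.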

  15. Modelling Social-Technical Attacks with Timed Automata

    DEFF Research Database (Denmark)

    David, Nicolas; David, Alexandre; Hansen, Rene Rydhof

    2015-01-01

    Attacks on a system often exploit vulnerabilities that arise from human behaviour or other human activity. Attacks of this type, so-called socio-technical attacks, cover everything from social engineering to insider attacks, and they can have a devastating impact on an unprepared organisation....... In this paper we develop an approach towards modelling socio-technical systems in general and socio-technical attacks in particular, using timed automata and illustrate its application by a complex case study. Thanks to automated model checking and automata theory, we can automatically generate possible attacks...

  16. UPPAAL-SMC: Statistical Model Checking for Priced Timed Automata

    DEFF Research Database (Denmark)

    Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand

    2012-01-01

    in the form of probability distributions and compare probabilities to analyze performance aspects of systems. The focus of the survey is on the evolution of the tool – including modeling and specification formalisms as well as techniques applied – together with applications of the tool to case studies....... on a series of extensions of the statistical model checking approach generalized to handle real-time systems and estimate undecidable problems. U PPAAL - SMC comes together with a friendly user interface that allows a user to specify complex problems in an efficient manner as well as to get feedback...

  17. Investment timing decisions in a stochastic duopoly model

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, Giovanni [Istituto di Econometria e CRANEC, Universita Cattolica del Sacro Cuore di Milan (Italy)]. E-mail: giovanni.marseguerra@unicatt.it; Cortelezzi, Flavia [Dipartimento di Diritto ed Economia delle Persone e delle Imprese, Universita dell' Insubria (Italy)]. E-mail: flavia.cortelezzi@uninsubria.it; Dominioni, Armando [CORE-Catholique de Louvain la Neuve (Belgium)]. E-mail: dominioni@core.ucl.ac.be

    2006-08-15

    We investigate the role of strategic considerations on the optimal timing of investment when firms compete for a new market (e.g., the provision of an innovative product) under demand uncertainty. Within a continuous time model of stochastic oligopoly, we show that strategic considerations are likely to be of limited impact when the new product is radically innovative whilst the fear of a rival's entry may deeply affect firms' decisions whenever innovation is to some extent limited. The welfare analysis shows surprisingly that the desirability of the different market structures considered does not depend on the fixed entry cost.

  18. Investment timing decisions in a stochastic duopoly model

    International Nuclear Information System (INIS)

    Marseguerra, Giovanni; Cortelezzi, Flavia; Dominioni, Armando

    2006-01-01

    We investigate the role of strategic considerations on the optimal timing of investment when firms compete for a new market (e.g., the provision of an innovative product) under demand uncertainty. Within a continuous time model of stochastic oligopoly, we show that strategic considerations are likely to be of limited impact when the new product is radically innovative whilst the fear of a rival's entry may deeply affect firms' decisions whenever innovation is to some extent limited. The welfare analysis shows surprisingly that the desirability of the different market structures considered does not depend on the fixed entry cost

  19. Testing for time-varying loadings in dynamic factor models

    DEFF Research Database (Denmark)

    Mikkelsen, Jakob Guldbæk

    Abstract: In this paper we develop a test for time-varying factor loadings in factor models. The test is simple to compute and is constructed from estimated factors and residuals using the principal components estimator. The hypothesis is tested by regressing the squared residuals on the squared factors. The squared correlation coefficient times the sample size has a limiting chi-squared distribution. The test can be made robust to serial correlation in the idiosyncratic errors. We find evidence of time-varying factor loadings in over half of the variables in a dataset for the US economy, while...
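    The test described in the abstract is easy to sketch: estimate factors and residuals by principal components, regress each series' squared residuals on the squared factors, and compare T·R² to a χ²(1) critical value. The simulation below generates data under constant loadings (the null), so rejections should stay near the nominal 5% level; the exact construction of the statistic is an assumption based on the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 50
f_t = rng.standard_normal(T)              # single latent factor
lam = rng.standard_normal(N)              # constant loadings (H0 is true)
X = np.outer(f_t, lam) + rng.standard_normal((T, N))

# principal-components estimator: factor = scaled leading left singular vector
U, S, Vt = np.linalg.svd(X, full_matrices=False)
fhat = U[:, 0] * np.sqrt(T)
lamhat = X.T @ fhat / T
resid = X - np.outer(fhat, lamhat)

def loading_stability_stat(e_i, fhat):
    """T * R^2 from regressing squared residuals on the squared factor;
    approximately chi2(1) under constant loadings."""
    y = e_i ** 2
    Z = np.column_stack([np.ones_like(fhat), fhat ** 2])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return len(y) * r2

stats_ = np.array([loading_stability_stat(resid[:, i], fhat) for i in range(N)])
reject = stats_ > 3.84      # 5% critical value of chi2(1)
```

    Running the same procedure on data generated with loadings that drift over time would push the rejection rate well above the nominal level.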

  20. Improving real-time inflow forecasting into hydropower reservoirs through a complementary modelling framework

    Science.gov (United States)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2015-08-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic power exchange. A complementary modelling framework offers an approach for improving real-time forecasting without modifying the pre-existing forecasting model, instead formulating an independent additive (complementary) model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure comprises an error model added on top of an unalterable constant-parameter conceptual model, and is applied in the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established from attributes of the residual time series of the conceptual model. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in decision-making in hydropower system operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95 % confidence interval indicated that the degree of success in containing 95 % of the observations varies across seasons and hydrologic years.
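    The complementary-model principle, leaving the operational model untouched and fitting an independent additive model to the structure in its residuals, can be sketched with a one-step AR(1) error model correcting a biased base forecast. The data are synthetic; the paper's error model additionally treats persistence and heteroscedasticity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
truth = 10.0 + np.sin(np.arange(n) / 20.0)            # synthetic inflow series
base = truth + 0.5 + 0.3 * rng.standard_normal(n)     # biased "operational" forecast
resid = truth - base                                  # structure the base model misses

# complementary AR(1) error model: r_t = c + phi * r_{t-1} + noise
Z = np.column_stack([np.ones(n - 1), resid[:-1]])
c, phi = np.linalg.lstsq(Z, resid[1:], rcond=None)[0]

# one-step-ahead corrected forecast = base forecast + predicted error
corrected = base[1:] + (c + phi * resid[:-1])
```

    The base model's code is never touched; all of the improvement comes from the additive correction, which is the operational appeal of the framework.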

  1. Modeling nonlinear time-dependent treatment effects: an application of the generalized time-varying effect model (TVEM).

    Science.gov (United States)

    Shiyko, Mariya P; Burkhalter, Jack; Li, Runze; Park, Bernard J

    2014-10-01

    The goal of this article is to introduce to social and behavioral scientists the generalized time-varying effect model (TVEM), a semiparametric approach for investigating time-varying effects of a treatment. The method is best suited for data collected intensively over time (e.g., experience sampling or ecological momentary assessments) and addresses questions pertaining to effects of treatment changing dynamically with time. Thus, of interest is the description of timing, magnitude, and (nonlinear) patterns of the effect. Our presentation focuses on practical aspects of the model. A step-by-step demonstration is presented in the context of an empirical study designed to evaluate effects of surgical treatment on quality of life among early stage lung cancer patients during posthospitalization recovery (N = 59; 61% female, M age = 66.1 years). Frequency and level of distress associated with physical symptoms were assessed twice daily over a 2-week period, providing a total of 1,544 momentary assessments. Traditional analyses (analysis of covariance [ANCOVA], repeated-measures ANCOVA, and multilevel modeling) yielded findings of no group differences. In contrast, generalized TVEM identified a pattern of the effect that varied in time and magnitude. Group differences manifested after Day 4. Generalized TVEM is a flexible statistical approach that offers insight into the complexity of treatment effects and allows modeling of nonnormal outcomes. The practical demonstration, shared syntax, and availability of a free set of macros aim to encourage researchers to apply TVEM to complex data and stimulate important scientific discoveries. PsycINFO Database Record (c) 2014 APA, all rights reserved.
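    The core of TVEM is replacing constant regression coefficients with smooth functions of time via a basis expansion. A minimal sketch using a cubic polynomial basis instead of the penalized splines of the actual TVEM macros; the data, effect shape and basis are illustrative, loosely mimicking the study's "effect emerges after Day 4" finding:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
t = rng.uniform(0, 14, n)                      # assessment day (0-14)
x = rng.integers(0, 2, n).astype(float)        # treatment indicator
true_effect = np.where(t > 4, 0.8 * (t - 4) / 10, 0.0)   # emerges after day 4
y = 1.0 + true_effect * x + 0.5 * rng.standard_normal(n)

# basis expansion of time (cubic polynomial here; TVEM proper uses splines)
B = np.column_stack([np.ones(n), t, t**2, t**3])
Z = np.column_stack([B, B * x[:, None]])       # intercept curve + effect curve
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

def effect_at(day):
    """Estimated time-varying treatment effect beta1(day)."""
    b = np.array([1.0, day, day**2, day**3])
    return float(b @ coef[4:])
```

    A time-averaged ANCOVA on the same data would report a single small coefficient and can easily miss the effect, which mirrors the contrast with traditional analyses described above.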

  2. Real-time modeling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    1994-08-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models

  3. Real-time modelling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    2000-01-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralised dispersion modelling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models. (author)

  4. Modeling Mechanical Properties of Aluminum Composite Produced Using Stir Casting Method

    Directory of Open Access Journals (Sweden)

    Muhammad Hayat Jokhio

    2011-01-01

    Full Text Available ANN (Artificial Neural Network) modeling methodology was adopted for predicting the mechanical properties of aluminum cast composite materials. For this purpose, aluminum alloy composites were developed using a conventional foundry method. Composite materials have a complex nature, with nonlinear relationships among heat treatment, processing parameters, and composition affecting their mechanical properties. Such nonlinear relationships can be modeled more efficiently by ANNs. Neural network modeling needs a sufficient database consisting of mechanical properties, chemical composition and processing parameters, and such a database was not available. Therefore, an extensive experimental program was carried out to develop the aluminum composite materials. Alloys containing Cu, Mg and Zn as the matrix were reinforced with 1-15% Al2O3 particles using the stir casting method. The alloy composites were cast in a metal mold. More than eighty standard samples were prepared for tensile tests. Sixty samples were given solution treatment at 580°C for half an hour and tempered at 120°C for 24 hours. The samples were characterized to investigate mechanical properties using a Scanning Electron Microscope, X-Ray Spectrometer, Optical Metallurgical Microscope, Vickers Hardness tester, Universal Testing Machine and Abrasive Wear Testing Machine. An MLP (Multilayer Perceptron) feedforward network was developed and used for modeling. Training, testing and validation of the model were carried out using the backpropagation learning algorithm. The modeling results show that an architecture of 14 inputs with 9 hidden neurons and 4 outputs, covering tensile strength, elongation, hardness and abrasive wear resistance, gives reasonably accurate results, with errors in the range of 2-7% in training, testing and validation.
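    The 14-9-4 MLP with backpropagation described above is straightforward to sketch. Since the 80-sample experimental dataset is not reproduced here, the data below are a synthetic stand-in with the same shapes; everything else (architecture, sigmoid activations, learning rate) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in: 80 samples, 14 inputs (composition, processing),
# 4 outputs (tensile strength, elongation, hardness, wear resistance),
# all scaled to [0, 1] as is usual before MLP training.
X = rng.random((80, 14))
Y = X @ rng.random((14, 4))
Y /= Y.max(axis=0)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 14-9-4 MLP as in the abstract, trained by plain batch backpropagation
W1 = 0.3 * rng.standard_normal((14, 9)); b1 = np.zeros(9)
W2 = 0.3 * rng.standard_normal((9, 4));  b2 = np.zeros(4)

def forward(X):
    H = sigmoid(X @ W1 + b1)             # hidden layer (9 neurons)
    return H, sigmoid(H @ W2 + b2)       # output layer (4 properties)

mse_init = ((forward(X)[1] - Y) ** 2).mean()
for _ in range(5000):
    H, out = forward(X)
    d2 = (out - Y) * out * (1 - out)     # output-layer delta (MSE loss)
    d1 = (d2 @ W2.T) * H * (1 - H)       # hidden-layer delta
    W2 -= 0.5 * H.T @ d2 / len(X); b2 -= 0.5 * d2.mean(axis=0)
    W1 -= 0.5 * X.T @ d1 / len(X); b1 -= 0.5 * d1.mean(axis=0)
mse_final = ((forward(X)[1] - Y) ** 2).mean()
```

    With the real dataset, the same loop would be wrapped with a train/test/validation split to reproduce the 2-7% error figures quoted above.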

  5. Reaction kinetics and reactor modeling for fuel processing of liquid hydrocarbons to produce hydrogen. Isooctane reforming

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco, Manuel [Department of Refining and Petrochemicals, Center for Research and Development of the Venezuelan Oil Industry (PDVSA-Intevep), Sector el Tambor, P.O. Box 76343, Los Teques, Edo Miranda (Venezuela); Sira, Jorge [Department of Mechanical Engineering, Universidad de los Andes, Merida (Venezuela); Kopasz, John [US Department of Energy, Chemical Technology Division, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2003-09-10

    A mathematical model was developed in the framework of the process simulator Aspen Plus in order to describe the reaction kinetics and performance of a fuel processor used for autothermal reforming of liquid hydrocarbons. Experimental results obtained in the facilities of Argonne National Laboratory (ANL) when reforming isooctane over a platinum-impregnated ceria-oxide catalyst were used to validate the reactor model. The reaction kinetics and reaction schemes were taken from the published literature, and most of the chemical reactions were modeled using the Langmuir-Hinshelwood-Hougen-Watson (LHHW) formulation to account for the effect of adsorption of reactants and products on the active sites of the catalyst. The water-gas-shift (WGS) reactor used to reduce the concentration of CO in the reformate was also modeled. Both reactor models use a simplified formulation for estimating the effectiveness factor of each chemical reaction in order to account for the effect of intraparticle mass-transfer limitations on reactor performance. Since literature data on the kinetics of autothermal reforming of liquid hydrocarbons over CeO2-Pt catalysts are scarce, the proposed kinetic model for the reaction network was coupled to the sequential quadratic programming (SQP) algorithm implemented in Aspen Plus in order to regress the kinetic constants for the different reactions. The model describes the trend of the experimental data in terms of hydrogen yield and product distribution with a relative deviation of ±15% for reforming temperatures between 600 and 800 °C and reactor space velocities between 15,000 and 150,000 h⁻¹.
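    The LHHW formulation mentioned above places an adsorption term in the denominator so that species competing for active sites inhibit the rate. A generic dual-site sketch, not the paper's regressed kinetics; all constants are hypothetical:

```python
def lhhw_rate(k, K, p):
    """Generic dual-site LHHW surface rate:
        r = k * K_A*K_B * p_A*p_B / (1 + K_A*p_A + K_B*p_B + K_C*p_C)**2
    k: surface rate constant; K = (K_A, K_B, K_C): adsorption constants;
    p = (p_A, p_B, p_C): partial pressures of reactants A, B and an
    inhibiting (adsorbing) product C."""
    K_A, K_B, K_C = K
    p_A, p_B, p_C = p
    site_term = 1.0 + K_A * p_A + K_B * p_B + K_C * p_C
    return k * K_A * K_B * p_A * p_B / site_term ** 2
```

    Raising the product partial pressure p_C lowers the rate even though p_C does not appear in the numerator; this product inhibition is exactly the adsorption effect the LHHW form is used to capture.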

  6. Real Time Fire Reconnaissance Satellite Monitoring System Failure Model

    Science.gov (United States)

    Nino Prieto, Omar Ariosto; Colmenares Guillen, Luis Enrique

    2013-09-01

    In this paper the Real Time Fire Reconnaissance Satellite Monitoring System is presented. This architecture is a legacy of the Detection System for Real-Time Physical Variables, which is undergoing a patent process in Mexico. The design methodology is Structured Analysis for Real Time (SA-RT) [8], and the software is specified with the LACATRE (Langage d'aide à la Conception d'Application multitâche Temps Réel) [9,10] real-time formal language. The system failure model is analyzed, and the proposal is based on AltaRica, a formal language for the design of critical systems and risk assessment. This formal architecture uses satellites as input sensors and was adapted from the original model, a design pattern for real-time physical-variable detection whose task is to monitor events such as natural disasters and health-related conditions, e.g. real-time sickness monitoring and prevention, as in the Real Time Diabetes Monitoring System, among others. Related work has been presented at the Mexican Space Agency (AEM) Creation and Consultation Forums (2010-2011), and at the international congress of the Mexican Aerospace Science and Technology Society (SOMECYTA) held in San Luis Potosí, México (2012). This architecture will allow real-time satellite fire monitoring, which will reduce the damage and danger caused by the fires that consume the forests and tropical forests of Mexico. The proposal provides a new system for disaster prevention, combining national and international technologies and cooperation for the benefit of humankind.

  7. Modelling Time-Varying Volatility in Financial Returns

    DEFF Research Database (Denmark)

    Amado, Cristina; Laakkonen, Helinä

    2014-01-01

    The “unusually uncertain” phase in the global financial markets has inspired many researchers to study the effects of ambiguity (or “Knightian uncertainty”) on the decisions made by investors and their implications for the capital markets. We contribute to this literature by using a modified...... version of the time-varying GARCH model of Amado and Teräsvirta (2013) to analyze whether the increasing uncertainty has caused excess volatility in the US and European government bond markets. In our model, volatility is multiplicatively decomposed into two time-varying conditional components: the first...... being captured by a stable GARCH(1,1) process and the second driven by the level of uncertainty in the financial market....
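    The multiplicative decomposition can be sketched as a stable GARCH(1,1) conditional variance h_t scaled by a smooth deterministic component g_t capturing the slowly moving uncertainty level. The functional form of g_t below is illustrative, not the estimated component of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 2000
omega, a, b = 0.05, 0.08, 0.90            # stable GARCH(1,1): a + b < 1

# smooth deterministic "uncertainty level" g_t (second variance component);
# a Gaussian bump standing in for a period of elevated market uncertainty
g = 1.0 + 0.8 * np.exp(-(((np.arange(T) - 1000) / 300.0) ** 2))

h = np.empty(T); eps = np.empty(T)
h[0] = omega / (1 - a - b)                # start at the unconditional variance
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + a * eps[t-1] ** 2 + b * h[t-1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

r = np.sqrt(g) * eps                      # returns: conditional variance g_t * h_t
```

    In estimation the logic runs in reverse: the smooth component g_t is fitted first and the returns are deflated by it, so that a standard stable GARCH(1,1) can describe what remains.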

  8. Measuring and Modeling Sound Interference and Reverberation Time in Classrooms

    Science.gov (United States)

    Gumina, Kaitlyn; Martell, Eric

    2015-04-01

    Research shows that children, even those without hearing difficulties, are affected by poor classroom acoustics; children with hearing loss, learning disabilities, speech delay, and attention problems are especially vulnerable. Poor acoustics can take a variety of forms, including destructive interference causing ``dead spots'' and extended reverberation times (RT), where echoes persist too long and interfere with further speech. In this research, I measured sound intensity at locations throughout three different types of classrooms at frequencies commonly associated with human speech to see what effect seating position has on intensity. I also used a program called Wave Cloud to model the time necessary for intensity to decrease by 60 decibels (RT60), both in idealized classrooms and in classrooms modeled on the ones I studied.
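    As a rough companion to such measurements, the classical Sabine formula estimates RT60 from room volume and total surface absorption. The sketch below is illustrative only (the study itself used the Wave Cloud simulation program); the room dimensions and absorption coefficients are hypothetical textbook-style values.

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine estimate of RT60, the time for sound to decay by 60 dB.
    surface_absorptions: list of (area_m2, absorption_coefficient) pairs."""
    A = sum(area * alpha for area, alpha in surface_absorptions)  # total absorption (sabins)
    return 0.161 * volume_m3 / A

# Hypothetical 10 m x 8 m x 3 m classroom (assumed, not from the study)
rt = sabine_rt60(240.0, [
    (80.0, 0.02),   # linoleum floor
    (80.0, 0.60),   # acoustic ceiling tile
    (108.0, 0.03),  # painted walls (minus windows/door, illustrative)
])
# rt is about 0.73 s, within common recommendations for classrooms
```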

  9. 2D Time-lapse Resistivity Monitoring of an Organic Produced Gas Plume in a Landfill using ERT.

    Science.gov (United States)

    Amaral, N. D.; Mendonça, C. A.; Doherty, R.

    2014-12-01

    The objective of this project is to study a landfill located on the margins of the Tietê River in São Paulo, Brazil, using the electrical resistivity tomography (ERT) method. Owing to the high organic matter concentrations in the quaternary sediments of the São Paulo Basin, biogas (CH4 and CO2) accumulates at depth, induced by anaerobic degradation of the organic matter. 2D resistivity sections have been acquired over a test area since March 2012, a total of seven datasets, the most recent dated October 2013. The surveyed line is 56 m long with an electrode spacing of 2 m. In addition, two boreholes along the line (one with 3 electrodes, the other with 2) improve data quality and precision. The boreholes also host a multi-level sampling system that indicates which fluid (gas or water) is present at each depth. With our results it was possible to map the position and extent of the gas plume in the sections, where it appears as a positive resistivity anomaly with the gas level at approximately 5 m depth. A time-lapse analysis (Matlab script) of the 2D resistivity sections shows how the biogas volume and position in the landfill change over time. Our preliminary results reveal a preferential gas pathway through the studied subsurface area. A consistent relation was also observed between the gas depth and microbiological data on archaea and bacteria populations.

  10. Detection of MRI artifacts produced by intrinsic heart motion using a saliency model

    Science.gov (United States)

    Salguero, Jennifer; Velasco, Nelson; Romero, Eduardo

    2017-11-01

    Cardiac Magnetic Resonance (CMR) requires synchronization with the ECG to correct many types of noise. However, complex heart motion frequently produces displaced slices that have to be either ignored or manually corrected, since ECG correction is useless in this case. This work presents a novel methodology that detects motion artifacts in CMR using a saliency method that highlights the region where the heart chambers are located. Once the Region of Interest (RoI) is set, its center of gravity is determined for each slice composing the volume. The deviation of the gravity center estimates the coherence between slices and is used to find slices with significant displacement. Validation was performed on distorted real images in which one slice was artificially misaligned with respect to the rest of the volume. The displaced slice is found with a recall of 84% and an F-score of 68%.
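    The centroid-deviation idea can be sketched as follows. This is a minimal stand-in, not the authors' saliency model: it treats the whole slice as the region of interest, computes each slice's intensity center of mass, and flags slices whose centroid deviates strongly from the median (a robust MAD rule).

```python
import numpy as np

def find_misaligned_slice(volume, threshold=3.0):
    """Flag slices whose in-plane center of mass deviates from the median.
    volume: (n_slices, H, W) array; returns indices of suspect slices."""
    centroids = []
    for sl in volume:
        total = sl.sum()
        ys, xs = np.indices(sl.shape)
        centroids.append((float((ys * sl).sum() / total),
                          float((xs * sl).sum() / total)))
    centroids = np.array(centroids)
    med = np.median(centroids, axis=0)
    dist = np.linalg.norm(centroids - med, axis=1)
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-9
    return np.where(dist > np.median(dist) + threshold * mad)[0]

# Synthetic check: a Gaussian blob, with slice 4 artificially shifted
H = W = 64
ys, xs = np.indices((H, W))
def blob(cy, cx):
    return np.exp(-((ys - cy)**2 + (xs - cx)**2) / 50.0)
vol = np.stack([blob(32, 32)] * 8)
vol[4] = blob(32, 44)            # displaced slice
suspect = find_misaligned_slice(vol)
```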

  11. Grafting fibroblasts genetically modified to produce L-dopa in a rat model of Parkinson disease

    International Nuclear Information System (INIS)

    Wolff, J.A.; Fisher, L.J.; Xu, L.; Jinnah, H.A.; Rosenberg, M.B.; Shimohama, S.; Gage, F.H.; Langlais, P.J.; Iuvone, P.M.; O'Malley, K.L.

    1989-01-01

    Rat fibroblasts were infected with a retroviral vector containing the cDNA for rat tyrosine hydroxylase (TH). A TH-positive clone was identified by biochemical assay and immunohistochemical staining. When supplemented in vitro with the pterin cofactors required for TH activity, these cells produced L-dopa and released it into the cell culture medium. Uninfected control cells and fibroblasts infected with the TH vector were grafted separately to the caudate of rats with unilateral 6-hydroxydopamine lesions of the nigrostriatal pathway. Only grafts containing TH-expressing fibroblasts were found to reduce rotational asymmetry. These results have general implications for the application of gene therapy to human neurological disease and specific implications for Parkinson disease.

  12. Using modeling and vicarious reinforcement to produce more positive attitudes toward mental health treatment.

    Science.gov (United States)

    Buckley, Gary I; Malouff, John M

    2005-05-01

    In this study, the authors evaluated the effectiveness of a video, developed for this study using principles of cognitive learning theory, in producing positive attitudinal change toward mental health treatment. The participants were 35 men and 45 women who were randomly assigned to watch either an experimental video, which included three positive first-person accounts of psychotherapy, or a control video that focused on the psychological construct of the self. Pre-intervention, post-intervention, and 2-week follow-up attitudes toward mental health treatment were measured using the Attitude Toward Seeking Professional Help Scale (E. H. Fischer & J. L. Turner, 1970). The experimental video group showed a significantly greater increase in positive attitude than did the control group. These results support the effectiveness of using the vicarious reinforcement elements of cognitive learning theory as a basis for changing attitudes toward mental health treatment.

  13. Dynamical analysis of a toxin-producing phytoplankton-zooplankton model with refuge.

    Science.gov (United States)

    Li, Juan; Song, Yongzhong; Wan, Hui

    2017-04-01

    To study the impacts of toxin produced by phytoplankton and of refuges provided for phytoplankton on phytoplankton-zooplankton interactions in lakes, we establish a simple phytoplankton-zooplankton system with a Holling type II response function. The existence and stability of positive equilibria are discussed. Bifurcation analyses are given using normal form theory, which reveals the mechanisms and nonlinear dynamics of the effects of toxin and refuge, including Hopf bifurcation and Bogdanov-Takens bifurcations of codimension 2 and 3. Numerical simulations are carried out to support our analytical results and help to explain the observed biological behaviors. Our findings show that both phytoplankton refuge and toxin have a significant impact on the onset and termination of algal blooms in freshwater lakes.

  14. Deriving dynamic marketing effectiveness from econometric time series models

    OpenAIRE

    Horváth, C.; Franses, Ph.H.B.F.

    2003-01-01

    To understand the relevance of marketing efforts, it has become standard practice to estimate the long-run and short-run effects of the marketing-mix, using, say, weekly scanner data. A common vehicle for this purpose is an econometric time series model. Issues that are addressed in the literature are unit roots, cointegration, structural breaks and impulse response functions. In this paper we summarize the most important concepts by reviewing all possible empirical cases that can...

  15. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    2000-01-01

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.

  16. Estimation of Continuous Time Models in Economics: an Overview

    OpenAIRE

    Clifford R. Wymer

    2009-01-01

    The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...

  17. TIME-DEPENDENT MODELS OF FLARES FROM SAGITTARIUS A*

    International Nuclear Information System (INIS)

    Dodds-Eden, Katie; Genzel, Reinhard; Gillessen, Stefan; Eisenhauer, Frank; Sharma, Prateek; Quataert, Eliot; Porquet, Delphine

    2010-01-01

    The emission from Sgr A*, the supermassive black hole in the Galactic Center, shows order-of-magnitude variability ('flares') a few times a day that is particularly prominent in the near-infrared (NIR) and X-rays. We present a time-dependent model for these flares motivated by the hypothesis that dissipation of magnetic energy powers the flares. We show that episodic magnetic reconnection can occur near the last stable circular orbit in time-dependent magnetohydrodynamic simulations of black hole accretion; the timescales and energetics of these events are broadly consistent with the flares from Sgr A*. Motivated by these results, we present a spatially one-zone time-dependent model for the electron distribution function in flares, including energy loss due to synchrotron cooling and adiabatic expansion. Synchrotron emission from transiently accelerated particles can explain the NIR/X-ray light curves and spectra of a luminous flare observed on 2007 April 4. A significant decrease in the magnetic field strength during the flare (coincident with the electron acceleration) is required to explain the simultaneity and symmetry of the simultaneous light curves. Our models predict that the NIR and X-ray spectral indices are related by Δα ≅ 0.5 (where νF_ν ∝ ν^α) and that there is only modest variation in the spectral index during flares. We also explore implications of this model for longer wavelength (radio-submillimeter) emission seemingly associated with X-ray and NIR flares; we argue that a few-hour decrease in the submillimeter emission is a more generic consequence of large-scale magnetic reconnection than delayed radio emission from adiabatic expansion.

  18. Inverse Modeling of Emissions and their Time Profiles

    Czech Academy of Sciences Publication Activity Database

    Resler, Jaroslav; Eben, Kryštof; Juruš, Pavel; Liczki, Jitka

    2010-01-01

    Roč. 1, č. 4 (2010), s. 288-295 ISSN 1309-1042 R&D Projects: GA MŽP SP/1A4/107/07 Grant - others:COST(XE) ES0602 Institutional research plan: CEZ:AV0Z10300504 Keywords : 4DVar * inverse modeling * diurnal time profile of emission * CMAQ adjoint * satellite observations Subject RIV: DG - Athmosphere Sciences, Meteorology

  19. Interpretation of Snow-Climate Feedback as Produced by 17 General Circulation Models

    Science.gov (United States)

    Cess, R. D.; Potter, G. L.; Zhang, M.-H.; Blanchet, J.-P.; Chalita, S.; Colman, R.; Dazlich, D. A.; del Genio, A. D.; Dymnikov, V.; Galin, V.; Jerrett, D.; Keup, E.; Lacis, A. A.; Le Treut, H.; Liang, X.-Z.; Mahfouf, J.-F.; McAvaney, B. J.; Meleshko, V. P.; Mitchell, J. F. B.; Morcrette, J.-J.; Norris, P. M.; Randall, D. A.; Rikus, L.; Roeckner, E.; Royer, J.-F.; Schlese, U.; Sheinin, D. A.; Slingo, J. M.; Sokolov, A. P.; Taylor, K. E.; Washington, W. M.; Wetherald, R. T.; Yagai, I.

    1991-08-01

    Snow feedback is expected to amplify global warming caused by increasing concentrations of atmospheric greenhouse gases. The conventional explanation is that a warmer Earth will have less snow cover, resulting in a darker planet that absorbs more solar radiation. An intercomparison of 17 general circulation models, for which perturbations of sea surface temperature were used as a surrogate climate change, suggests that this explanation is overly simplistic. The results instead indicate that additional amplification or moderation may be caused both by cloud interactions and longwave radiation. One measure of this net effect of snow feedback was found to differ markedly among the 17 climate models, ranging from weak negative feedback in some models to strong positive feedback in others.

  20. Technical cost modelling for a generic 45-m wind turbine blade produced by vacuum infusion (VI)

    International Nuclear Information System (INIS)

    A detailed technical cost analysis has been conducted on a generic 45-m wind turbine blade manufactured using the vacuum infusion (VI) process, in order to isolate areas of significant cost savings. The analysis has focused on a high labour cost environment such as the UK and investigates the influence of varying labour costs, programme life, component area, deposition time, cure time and reinforcement price with respect to production volume. A split of the cost centres showed the dominance of material and labour costs at approximately 51% and 41%, respectively. Due to the dominance of materials, it was shown that fluctuations in reinforcement costs can easily increase or decrease the cost of a turbine blade by up to 14%. Similarly, improving material deposition time by 2 h can save approximately 5% on the total blade cost. However, saving 4 h on the cure cycle only has the potential to provide a 2% cost saving. (author)
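    The sensitivities quoted above can be reproduced with back-of-envelope arithmetic. The baseline cost and the reinforcement share of the material bill below are hypothetical placeholders (the abstract gives only the 51%/41% material/labour split); they are chosen to show how a plausible reinforcement price swing maps onto the quoted ~14% change in total blade cost.

```python
# Hypothetical baseline, for illustration only (absolute costs are not given)
total_cost = 100_000.0           # arbitrary currency units
materials = 0.51 * total_cost    # material share reported in the study
labour    = 0.41 * total_cost    # labour share reported in the study
other     = total_cost - materials - labour

# Assume reinforcement is ~55% of the material bill (assumed, not from the
# abstract); then a 50% reinforcement price swing moves the blade cost by:
reinforcement = 0.55 * materials
delta = 0.50 * reinforcement
swing_pct = 100.0 * delta / total_cost   # ~14% of total blade cost
```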

  1. Distribution and Orientation of Carbon Fibers in Polylactic Acid Parts Produced by Fused Deposition Modeling

    DEFF Research Database (Denmark)

    Hofstätter, Thomas; W. Gutmann, Ingomar; Koch, Thomas

    2016-01-01

    The aim of this paper is to understand fiber orientation through investigation of the inner configuration of a polylactic acid matrix reinforced with short carbon fibers after a fused deposition modeling extrusion process. The final parts were analyzed by X-ray tomography and magnetic resonance imaging, allowing the orientation and distribution of the fibers within the part to be resolved. The research contributes to the understanding of fiber orientation and fiber reinforcement in fused deposition modeling parts in additive manufacturing.

  2. Space-Time Hybrid Model for Short-Time Travel Speed Prediction

    Directory of Open Access Journals (Sweden)

    Qi Fan

    2018-01-01

    Short-time traffic speed forecasting is a significant issue for developing Intelligent Transportation Systems applications, and accurate speed forecasts are necessary inputs for the Intelligent Traffic Security Information System (ITSIS) and advanced traffic management systems (ATMS). This paper presents a hybrid model for travel speed based on analysis of temporal and spatial characteristics and on data fusion. The proposed methodology predicts speed by dividing the data into three parts: a periodic trend estimated by a Fourier series, a residual part modeled by an ARIMA model, and possible events affected by upstream or downstream traffic conditions. The aim of this study is to improve prediction accuracy by modeling the variation of speed in time and space, so that the forecast results simultaneously reflect the periodic variation of traffic speed and emergencies. This information could provide decision-makers with a basis for developing traffic management measures. To achieve the research objective, one year of speed data was collected in the Twin Cities metro area, Minnesota. The experimental results demonstrate that the proposed method can be used to explore the periodic characteristics of speed data and improves the accuracy of travel speed prediction.
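    The periodic-plus-residual decomposition can be sketched on synthetic data. This is a minimal illustration of the idea, not the paper's model: the daily pattern is fit by a truncated Fourier series via least squares, and a simple AR(1) stands in for the ARIMA residual component (the event/spatial part is omitted).

```python
import numpy as np

rng = np.random.default_rng(1)
n, period = 500, 48                      # e.g. 48 half-hour intervals per day
t = np.arange(n)
speed = (60 + 10 * np.sin(2 * np.pi * t / period)
         + 4 * np.cos(4 * np.pi * t / period)
         + rng.normal(0, 1.5, n))        # noisy synthetic travel speeds (km/h)

# 1) Periodic trend: least-squares fit of a truncated Fourier series
K = 3
cols = [np.ones(n)]
for k in range(1, K + 1):
    cols += [np.sin(2 * np.pi * k * t / period),
             np.cos(2 * np.pi * k * t / period)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
trend = X @ coef

# 2) Residual part: AR(1) fitted by least squares (stand-in for ARIMA)
r = speed - trend
phi = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])

# One-step-ahead forecast: Fourier trend at t = n plus AR(1) residual forecast
x_next = [1.0]
for k in range(1, K + 1):
    x_next += [np.sin(2 * np.pi * k * n / period),
               np.cos(2 * np.pi * k * n / period)]
forecast = np.array(x_next) @ coef + phi * r[-1]
```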

  3. Simple cosmological model with inflation and late times acceleration

    Science.gov (United States)

    Szydłowski, Marek; Stachowski, Aleksander

    2018-03-01

    In the framework of polynomial Palatini cosmology, we investigate a simple homogeneous and isotropic cosmological model with matter in the Einstein frame. We show that in this model early inflation appears during cosmic evolution and the expansion accelerates at late times. In this frame we obtain the Friedmann equation with matter and dark energy in the form of a scalar field with a potential whose form is determined in a covariant way by the Ricci scalar of the FRW metric. The energy densities of matter and dark energy are also parameterized through the Ricci scalar. Early inflation is obtained only for an infinitesimally small fraction of the energy density of matter. There is an interaction between matter and dark energy because the dark energy is decaying. To characterize inflation we calculate the slow-roll parameters and the constant-roll parameter in terms of the Ricci scalar. We find a characteristic behavior of the dark energy density as a function of cosmic time, following a logistic-like curve that interpolates between two almost constant phases. From the required number of e-folds we obtain a bound on the model parameter.

  4. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure is introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. sunspot numbers and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  5. Assimilation of LAI time-series in crop production models

    Science.gov (United States)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve resource-use efficiency. In previous studies and operational applications, remote sensing has proven a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables into crop production models would improve agricultural decision support at both the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated into the crop production model LINTUL to improve yield forecasting at field level. The effects of the assimilation method and of the number of assimilated observations were evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used to calibrate the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated into the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field-measured yield. Furthermore, we analysed the potential of assimilating LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when weekly LAI measurements were assimilated in the LINTUL model over the 2011 season. LINTUL-3 furthermore reveals the main growth-reducing factors, which are useful for farm decision support.
The combination of crop models and sensor
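    The 'updating' assimilation technique mentioned in this record can be illustrated with a toy example. This is not LINTUL-3; it is a hypothetical logistic LAI growth model in which the state is replaced (direct insertion, or optionally a weighted nudge) whenever a remote-sensing LAI observation is available.

```python
import numpy as np

def simulate_lai(days, r=0.09, lai_max=5.0, lai0=0.1, obs=None, weight=1.0):
    """Toy logistic LAI growth with 'updating' assimilation.
    obs: dict {day: measured_LAI}; weight=1.0 means direct insertion."""
    lai = np.empty(days)
    lai[0] = lai0
    for d in range(1, days):
        growth = r * lai[d - 1] * (1 - lai[d - 1] / lai_max)
        lai[d] = lai[d - 1] + growth
        if obs and d in obs:
            # Nudge the model state toward the observation
            lai[d] = (1 - weight) * lai[d] + weight * obs[d]
    return lai

open_loop = simulate_lai(120)                            # no assimilation
updated = simulate_lai(120, obs={40: 2.0, 60: 3.2, 80: 4.1})
```

    With `weight=1.0` the model trajectory is reset to each observation, the simplest form of updating; intermediate weights trade model against measurement.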

  6. Real Time Updating in Distributed Urban Rainfall Runoff Modelling

    DEFF Research Database (Denmark)

    Borup, Morten; Madsen, Henrik

    are equipped with basins and automated structures that allow for a large degree of control of the systems, but in order to do this optimally it is required to know what is happening throughout the system. For this task models are needed, due to the large scale and complex nature of the systems. The physically...... that are being updated from system measurements was studied. The results showed that the fact alone that it takes time for rainfall data to travel the distance between gauges and catchments has such a big negative effect on the forecast skill of updated models, that it can justify the choice of even very...... when it was used to update the water level in multiple upstream basins. This method is, however, not capable of utilising the spatial correlations in the errors to correct larger parts of the models. To accommodate this a method was developed for correcting the slow changing inflows to urban drainage...

  7. A Continuous-Time Model for Valuing Foreign Exchange Options

    Directory of Open Access Journals (Sweden)

    James J. Kung

    2013-01-01

    This paper makes use of stochastic calculus to develop a continuous-time model for valuing European options on foreign exchange (FX) when both domestic and foreign spot rates follow a generalized Wiener process. Using the dollar/euro exchange rate as input for parameter estimation and employing our FX option model as a yardstick, we find that the traditional Garman-Kohlhagen FX option model, which assumes constant spot rates, misvalues calls and puts for different values of the ratio of exchange rate to exercise price. Specifically, it undervalues calls when the ratio is between 0.70 and 1.08 and overvalues calls when the ratio is between 1.18 and 1.30, whereas it overvalues puts when the ratio is between 0.70 and 0.82 and undervalues puts when the ratio is between 0.86 and 1.30.
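    The Garman-Kohlhagen benchmark the authors compare against is easy to state in code. The sketch below is the standard textbook formula with illustrative inputs; the rates and volatility are placeholders, not the paper's dollar/euro estimates.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def garman_kohlhagen(S, K, T, r_d, r_f, sigma, kind="call"):
    """Garman-Kohlhagen value of a European FX option.
    S: spot rate, K: strike, T: years to expiry,
    r_d/r_f: domestic/foreign risk-free rates, sigma: volatility."""
    d1 = (log(S / K) + (r_d - r_f + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * exp(-r_f * T) * norm_cdf(d1) - K * exp(-r_d * T) * norm_cdf(d2)
    return K * exp(-r_d * T) * norm_cdf(-d2) - S * exp(-r_f * T) * norm_cdf(-d1)

# Illustrative at-the-money dollar/euro option (parameters assumed)
call = garman_kohlhagen(1.10, 1.10, 0.5, 0.02, 0.01, 0.10)
put  = garman_kohlhagen(1.10, 1.10, 0.5, 0.02, 0.01, 0.10, kind="put")
```

    A quick sanity check is FX put-call parity: C - P = S e^(-r_f T) - K e^(-r_d T).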

  8. IEA-ETSAP TIMES models in Denmark. Preliminary edition

    Energy Technology Data Exchange (ETDEWEB)

    Grohnheit, P.E.

    2011-03-15

    This report presents the project 'Danish participation in IEA-ETSAP, Annex XI, 2008-2010', which continued the Danish participation in ETSAP under Annex XI 'JOint STudies for New And Mitigated Energy Systems (JOSTNAMES): Climate friendly, Secure and Productive Energy Systems'. The main activity has been semi-annual workshops focusing on presentations of model analyses and use of the ETSAP tools (the MARKAL/TIMES family of models). Contributions to these workshops have been based on various collaborative projects within the EU research programmes and the Danish Centre for Environment, Energy and Health (CEEH). In addition, the DTU Climate Centre at Risoe, founded in the autumn of 2008, has taken part in the ETSAP workshops and used the ETSAP model tools for projects, papers, and presentations, as well as for a Ph.D. project. (Author)

  9. Model-checking dense-time Duration Calculus

    DEFF Research Database (Denmark)

    Fränzle, Martin

    2004-01-01

    Since the seminal work of Zhou Chaochen, M. R. Hansen, and P. Sestoft on decidability of dense-time Duration Calculus [Zhou, Hansen, Sestoft, 1993] it is well known that decidable fragments of Duration Calculus can only be obtained through withdrawal of much of the interesting vocabulary of this logic. While this was formerly taken as an indication that key-press verification of implementations with respect to elaborate Duration Calculus specifications was also impossible, we show that the model property is well decidable for realistic designs which feature natural constraints...... suitably sparser model classes we obtain model-checking procedures for rich subsets of Duration Calculus. Together with undecidability results also obtained, this sheds light upon the exact borderline between decidability and undecidability of Duration Calculi and related logics.

  10. Hesperidin produces cardioprotective activity via PPAR-γ pathway in ischemic heart disease model in diabetic rats.

    Directory of Open Access Journals (Sweden)

    Yogeeta O Agrawal

    Full Text Available The present study investigated the effect of hesperidin, a natural flavonoid, in cardiac ischemia and reperfusion (I/R injury in diabetic rats. Male Wistar rats with diabetes were divided into five groups and were orally administered saline once daily (IR-sham and IR-control, Hesperidin (100 mg/kg/day; IR-Hesperidin, GW9962 (PPAR-γ receptor antagonist, or combination of both for 14 days. On the 15th day, in the IR-control and IR-treatment groups, rats were subjected to left anterior descending (LAD coronary artery occlusion for 45 minutes followed by a one-hour reperfusion. Haemodynamic parameters were recorded and rats were sacrificed; hearts were isolated for biochemical, histopathological, ultrastructural and immunohistochemistry. In the IR-control group, significant ventricular dysfunctions were observed along with enhanced expression of pro-apoptotic protein Bax. A decline in cardiac injury markers lactate dehydrogenase activity, CK-MB and increased content of thiobarbituric acid reactive substances, a marker of lipid peroxidation, and TNF-α were observed. Hesperidin pretreatment significantly improved mean arterial pressure, reduced left ventricular end-diastolic pressure, and improved both inotropic and lusitropic function of the heart (+LVdP/dt and -LVdP/dt as compared to IR-control. Furthermore, hesperidin treatment significantly decreased the level of thiobarbituric acid reactive substances and reversed the activity of lactate dehydrogenase towards normal value. Hesperidin showed anti-apoptotic effects by upregulating Bcl-2 protein and decreasing Bax protein expression. Additionally, histopathological and ultrastructural studies reconfirmed the protective action of hesperidin. On the other hand, GW9662, selective PPAR-γ receptor antagonist, produced opposite effects and attenuated the hesperidin induced improvements. The study for the first time evidence the involvement of PPAR-γ pathway in the cardioprotective activity of

  11. Model based Computerized Ionospheric Tomography in space and time

    Science.gov (United States)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions in both space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data from the TNPGN-Active GPS receiver network in Turkey.

  12. Modelling and optimisation of fs laser-produced K (alpha) sources

    Czech Academy of Sciences Publication Activity Database

    Gibbon, P.; Mašek, Martin; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; von der Linde, D.

    2009-01-01

    Roč. 96, č. 1 (2009), 23-31 ISSN 0947-8396 R&D Projects: GA MŠk(CZ) LC528 Institutional research plan: CEZ:AV0Z10100523 Keywords : fs laser-plasma interaction * K (alpha) sources * 3D numerical modelling Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.595, year: 2009

  13. A model of disordered zone formation in Cu3Au under cascade-producing irradiation

    International Nuclear Information System (INIS)

    Kapinos, V.G.; Bacon, D.J.

    1995-01-01

    A model is proposed to describe the disordering of ordered Cu3Au under irradiation. For the thermal spike phase of a displacement cascade, the processes of heat evolution and conduction in the cascade region are modelled by solving the thermal conduction equation with a discretization method for a medium that can melt and solidify under appropriate conditions. The model treats disordering as the result of cascade core melting, with the final disordered zone corresponding to the largest molten zone achieved. The initial conditions for this treatment are obtained by simulating cascades with the MARLOWE code. The contrast of disordered zones imaged in a superlattice dark-field reflection and projected on the plane parallel to the surface of a thin foil was calculated. The average size of images from hundreds of cascades created by incident Cu+ ions was calculated for different ion energies and compared with experimental transmission electron microscopy data. The model is in reasonable quantitative agreement with the experimentally observed trends. (author)

  14. Modelling Limit Order Execution Times from Market Data

    Science.gov (United States)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is only loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means an ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price; it does not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price; it causes an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet related to limit order executions: execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution whose tail fatness can be controlled, to model limit order execution times.
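    The q-Weibull density generalizes the Weibull by replacing the exponential with a Tsallis q-exponential, which for q > 1 yields a power-law (fat) tail. Below is a minimal sketch with illustrative parameters; the talk's fitted values are not given in the abstract.

```python
import numpy as np

def q_exp(u, q):
    """Tsallis q-exponential; reduces to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = 1.0 + (1.0 - q) * u
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def q_weibull_pdf(x, q, beta, eta):
    """q-Weibull density (1 <= q < 2): fat-tailed generalization of the Weibull."""
    x = np.asarray(x, dtype=float)
    return ((2.0 - q) * (beta / eta) * (x / eta) ** (beta - 1.0)
            * q_exp(-(x / eta) ** beta, q))

x = np.linspace(0.01, 50, 5000)
ordinary = q_weibull_pdf(x, 1.0, 1.5, 2.0)   # plain Weibull as the q -> 1 limit
fat_tail = q_weibull_pdf(x, 1.5, 1.5, 2.0)   # q > 1: power-law tail, slower decay
```

    For q > 1 the tail decays as a power law rather than a stretched exponential, which is what lets the distribution capture very long execution times.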

  15. Programming Models for Concurrency and Real-Time

    Science.gov (United States)

    Vitek, Jan

    Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming such systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream processing and concurrent active objects. Among the key innovations in Flexotasks is that it supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue's Ovm research virtual machine framework and the other on WebSphere, IBM's production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 us and have performance competitive with hand-coded C programs.

  16. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  17. Model Penjadwalan Batch Multi Item dengan Dependent Processing Time

    Directory of Open Access Journals (Sweden)

    Sukoyo Sukoyo

    2010-01-01

    Full Text Available This paper investigates the development of single-machine batch scheduling for multiple items with dependent processing times. The batch scheduling problem is to determine simultaneously the number of batches (N), the item and size allocated to each batch, and the processing sequence of the resulting batches. We use total actual flow time as the measure of schedule performance. The multi-item batch scheduling problem can be formulated as a binary-integer nonlinear programming model, because the number of batches must take an integer value, the allocation of items to the resulting batches requires binary variables, and there are nonlinearities in the objective function and constraints due to the dependent processing times. By relaxing the decision variable for the number of batches (N) into a parameter, a heuristic procedure can be applied to find a solution of the single-machine batch scheduling problem for multiple items.

  18. Modeling of CO and NOx produced by vehicles in Mashhad, 2012

    Directory of Open Access Journals (Sweden)

    Ali Asghar Najafpoor

    2014-11-01

    Full Text Available Background: In large cities, the share of vehicles in air pollutant emissions is nearly 70%, mainly due to the use of fossil fuels. Environmental simulation has many advantages, such as accuracy and speed of modeling. The present study was conducted to create a model of air pollution [carbon monoxide (CO) and nitrogen oxides (NOx)] from vehicles over the following fifty years in Mashhad. Methods: Using data collected from the license plate, traffic and transportation organizations, modeling of CO and NOx was performed with the STELLA software. Five strategies were applied in the model: reduction in the number of imported vehicles, reduction in the distance traveled by vehicles, increase in the number of junked vehicles, application of Euro 4 standards instead of Euro 3, and a combination of these. Results: Under current conditions, CO and NOx emissions are 27,894 and 2,121 ton/year, and after 50 years they would be 26,227,930 and 2,070,011 ton/year, respectively. Applying the aforementioned strategies, emissions declined by approximately (35% and 35%), (50% and 50%), (16% and 16%), (7% and 47%) and (75% and 85%), respectively. Conclusion: The developed model showed that if present conditions persist, air quality will become more and more undesirable over the following 50 years. Application of the second strategy, reduction of the distance traveled, was the most effective in reducing emissions, so it would be advisable to consider this strategy in administrative policies. Nevertheless, as far as possible, all of the strategies ought to be taken advantage of.
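
    The stock-and-flow logic described above (vehicle imports in, scrappage out, emissions proportional to fleet size and distance traveled) can be sketched outside a system-dynamics tool such as STELLA. All numbers and the function below are purely illustrative assumptions, not the study's calibrated model:

```python
# Hypothetical stock-flow sketch of fleet emissions, not the STELLA model.
def project_emissions(years, fleet0, imports, scrap_rate, km_per_vehicle, g_per_km):
    """Accumulate emissions (tonnes) over a horizon for a simple fleet stock."""
    fleet, total = fleet0, 0.0
    for _ in range(years):
        fleet = fleet + imports - scrap_rate * fleet   # stock update: inflow minus scrappage
        total += fleet * km_per_vehicle * g_per_km / 1e6
    return total

base = project_emissions(50, 800_000, 60_000, 0.03, 12_000, 20.0)
less_travel = project_emissions(50, 800_000, 60_000, 0.03, 6_000, 20.0)
```

    Because emissions are linear in distance traveled, halving km_per_vehicle halves the cumulative total, which mirrors why the distance-reduction strategy is so effective in the study's model.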

  19. Milk fermented with a 15-lipoxygenase-1-producing Lactococcus lactis alleviates symptoms of colitis in a murine model.

    Science.gov (United States)

    Saraiva, Tessalia D L; Morais, Katia; Pereira, Vanessa B; de Azevedo, Marcela; Rocha, Clarissa S; Prosperi, Camila C; Gomes-Santos, Ana C; Bermudez-Humaran, Luis; Faria, Ana M C; Blottiere, Herve M; Langella, Philippe; Miyoshi, Anderson; de LeBlanc, Alejandra de Moreno; LeBlanc, Jean G; Azevedo, Vasco

    2015-01-01

    Inflammatory bowel diseases (IBD), such as Crohn's disease and ulcerative colitis, are characterized by extensive inflammation due to dysregulation of the innate and adaptive immune system; their exact etiology is not yet completely understood. Currently there is no cure for IBD; thus, the search for new molecules capable of controlling IBD, and for ways to deliver them to the site of inflammation, is the goal of many researchers. The aim of this work was to evaluate the anti-inflammatory effect of the administration of milk fermented by a Lactococcus (L.) lactis strain producing 15-lipoxygenase-1 (15-LOX-1), using a trinitrobenzenesulfonic acid-induced IBD mouse model. The results demonstrated that 15-LOX-1-producing L. lactis was effective in preventing the intestinal damage associated with inflammatory bowel disease in a murine model. The work also confirmed previous studies showing that fermented milk is an effective vehicle for administration of recombinant lactic acid bacteria expressing beneficial molecules.

  20. Effect of milling time and CNT concentration on hardness of CNT/Al{sub 2024} composites produced by mechanical alloying

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Bustamante, R. [Centro de Investigacion en Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnologia, Miguel de Cervantes No.120, C.P. 31109, Chihuahua, Chih. (Mexico); Perez-Bustamante, F. [Universidad Autonoma de Chihuahua (UACH), Facultad de Ingenieria, Circuito No. 1 Nuevo Campus Universitario, C.P. 31125, Chihuahua, Chih. (Mexico); Estrada-Guel, I. [Centro de Investigacion en Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnologia, Miguel de Cervantes No.120, C.P. 31109, Chihuahua, Chih. (Mexico); Licea-Jimenez, L. [Centro de Investigacion en Materiales Avanzados S.C. (CIMAV), Unidad Mty, Autopista Monterrey-Aeropuerto Km 10, A. P. 43, C.P. 66600, Apodaca, N.L. (Mexico); Miki-Yoshida, M. [Centro de Investigacion en Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnologia, Miguel de Cervantes No.120, C.P. 31109, Chihuahua, Chih. (Mexico); Martinez-Sanchez, R., E-mail: roberto.martiez@cimav.edu.mx [Centro de Investigacion en Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnologia, Miguel de Cervantes No.120, C.P. 31109, Chihuahua, Chih. (Mexico)

    2013-01-15

    Carbon nanotube/2024 aluminum alloy (CNT/Al{sub 2024}) composites were fabricated with a combination of mechanical alloying (MA) and powder metallurgy routes. Composites were microstructurally and mechanically evaluated in the as-sintered condition. A homogeneous dispersion of CNTs in the Al matrix was observed by field emission scanning electron microscopy. High-resolution transmission electron microscopy confirmed not only the presence of well-dispersed CNTs but also needle-like aluminum carbide (Al{sub 4}C{sub 3}) crystals in the Al matrix. The formation of Al{sub 4}C{sub 3} was attributed to the interaction between the outer shells of the CNTs and the Al matrix during the MA process, with crystallization taking place after the sintering process. The mechanical behavior of the composites was evaluated by Vickers microhardness measurements, indicating a significant improvement in hardness as a function of the CNT content. This improvement was associated with the homogeneous dispersion of CNTs and the presence of Al{sub 4}C{sub 3} in the aluminum alloy matrix. - Highlights: The 2024 aluminum alloy was reinforced by CNTs through a mechanical alloying process. Composites were microstructurally and mechanically evaluated after sintering. The greater the CNT concentration, the greater the hardness of the composites. The highest hardness in composites is achieved at 20 h of milling. The formation of Al{sub 4}C{sub 3} does not show a direct relationship with the milling time.

  1. Orbital component extraction by time-variant sinusoidal modeling.

    Science.gov (United States)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-04-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on the (Fast) Fourier Transform. This technique has no unique solution separating variations in amplitude from variations in frequency. This characteristic makes it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. Here, we circumvent this drawback by using a polynomial approach to estimate the instantaneous amplitude and frequency of orbital components. This approach has proven useful for characterizing audio signals (music and speech), which are non-stationary in nature (Zivanovic and Schoukens, 2010, 2012). Paleoclimate proxy signals and audio signals have similar dynamics; the only difference is the frequency relationship between the different components. A harmonic frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, the latter difference is irrelevant for the problem at hand. Using a sliding-window approach, the model captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretation, whereas the latter are estimated by means of linear least squares. As an output, the model provides the orbital component waveform, either in the depth or time domain. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns can be used to reconstruct changes in accumulation rate, whereas amplitude modulation can be used to reconstruct e.g. eccentricity-modulated precession.
The time-variant sinusoidal model
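
    The core idea above, a stationary sinusoid at mean frequency f0 modulated by a polynomial whose coefficients follow from linear least squares, can be sketched as follows. This is an illustrative reconstruction under those stated assumptions, not the authors' code:

```python
import numpy as np

def fit_modulated_sinusoid(t, x, f0, degree=2):
    """Least-squares fit of x(t) ~ Re[p(t) exp(j 2 pi f0 t)] with polynomial p.
    Returns the estimated instantaneous amplitude |p(t)| at the sample times."""
    cols = []
    for k in range(degree + 1):
        cols.append(t ** k * np.cos(2 * np.pi * f0 * t))
        cols.append(t ** k * np.sin(2 * np.pi * f0 * t))
    A = np.column_stack(cols)                      # linear in the coefficients
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    a = sum(coef[2 * k] * t ** k for k in range(degree + 1))
    b = sum(coef[2 * k + 1] * t ** k for k in range(degree + 1))
    return np.hypot(a, b)                          # instantaneous amplitude

t = np.linspace(0, 10, 2000)
true_amp = 1.0 + 0.3 * t / 10                      # slowly growing envelope
x = true_amp * np.cos(2 * np.pi * 1.0 * t + 0.4)
est = fit_modulated_sinusoid(t, x, f0=1.0, degree=2)
```

    Because the model is linear in the polynomial coefficients, the fit is a single least-squares solve per window; the recovered envelope tracks the true amplitude modulation closely.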

  2. Sites That Can Produce Left-Handed Amino Acids in the Supernova Neutrino Amino Acid Processing Model

    OpenAIRE

    Boyd, Richard N.; Famiano, Michael A.; Onaka, Takashi; Kajino, Toshitaka

    2018-01-01

    The Supernova Neutrino Amino Acid Processing model, which uses electron anti-neutrinos and the magnetic field from a source object such as a supernova to selectively destroy one amino acid chirality, is studied for possible sites that would produce meteoroids having partially left-handed amino acids. Several sites appear to provide the requisite magnetic field intensities and electron anti-neutrino fluxes. These results have obvious implications for the origin of life on Earth.

  3. Sites that Can Produce Left-handed Amino Acids in the Supernova Neutrino Amino Acid Processing Model

    Science.gov (United States)

    Boyd, Richard N.; Famiano, Michael A.; Onaka, Takashi; Kajino, Toshitaka

    2018-03-01

    The Supernova Neutrino Amino Acid Processing model, which uses electron anti-neutrinos and the magnetic field from a source object such as a supernova to selectively destroy one amino acid chirality, is studied for possible sites that would produce meteoroids with partially left-handed amino acids. Several sites appear to provide the requisite magnetic field intensities and electron anti-neutrino fluxes. These results have obvious implications for the origin of life on Earth.

  4. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  5. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Science.gov (United States)

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
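
    The Weibull AFT setup simulated in these studies can be sketched in a few lines: log survival time is linear in the treatment indicator with a scaled minimum-extreme-value error, and in the absence of censoring the treatment coefficient is recoverable by ordinary least squares on log(T). This numpy sketch is an illustration of the AFT idea only (it does not reproduce the SAS LIFEREG analyses), and all coefficient values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.integers(0, 2, n)              # treatment indicator
b0, b1, sigma = 1.0, 0.5, 0.5          # hypothetical AFT coefficients
w = np.log(-np.log(rng.random(n)))     # standard minimum-extreme-value error
log_t = b0 + b1 * x + sigma * w        # Weibull survival times on the log scale

# With no censoring, ordinary least squares on log(T) recovers the
# treatment coefficient b1 (the intercept absorbs E[sigma * w]).
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, log_t, rcond=None)[0]
```

    The extreme-value error is what makes exp(log_t) Weibull-distributed; once censoring is introduced, the naive OLS estimate becomes biased, which motivates the likelihood-based AFT fitting the paper discusses.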

  6. Novel mouse hemostasis model for real-time determination of bleeding time and hemostatic plug composition

    Science.gov (United States)

    GETZ, T. M.; PIATT, R.; PETRICH, B. G.; MONROE, D.; MACKMAN, N.; BERGMEIER, W.

    2015-01-01

    Summary Introduction Hemostasis is a rapid response by the body to stop bleeding at sites of vessel injury. Both platelets and fibrin are important for the formation of a hemostatic plug. Mice have been used to uncover the molecular mechanisms that regulate the activation of platelets and coagulation under physiologic conditions. However, measurements of hemostasis in mice are quite variable, and current methods do not quantify platelet adhesion or fibrin formation at the site of injury. Methods We describe a novel hemostasis model that uses intravital fluorescence microscopy to quantify platelet adhesion, fibrin formation and time to hemostatic plug formation in real time. Repeated vessel injuries of ~ 50–100 μm in diameter were induced with laser ablation technology in the saphenous vein of mice. Results Hemostasis in this model was strongly impaired in mice deficient in glycoprotein Ibα or talin-1, which are important regulators of platelet adhesiveness. In contrast, the time to hemostatic plug formation was only minimally affected in mice deficient in the extrinsic tissue factor (TFlow) or the intrinsic factor IX coagulation pathways, even though platelet adhesion was significantly reduced. A partial reduction in platelet adhesiveness obtained with clopidogrel led to instability within the hemostatic plug, especially when combined with impaired coagulation in TFlow mice. Conclusions In summary, we present a novel, highly sensitive method to quantify hemostatic plug formation in mice. On the basis of its sensitivity to platelet adhesion defects and its real-time imaging capability, we propose this model as an ideal tool with which to study the efficacy and safety of antiplatelet agents. PMID:25442192

  7. Modeling OPEC behavior: theories of risk aversion for oil producer decisions

    International Nuclear Information System (INIS)

    Reynolds, D.B.

    1999-01-01

    Theories of OPEC such as price leadership, cartel, or game-theoretic models suggest an incentive for OPEC members to expand their production capacity well above current levels in order to maximize revenues. Yet individual OPEC members consistently explore for and develop oil fields at a level well below their potential. The cause of low oil exploration and development efforts among OPEC members, and even some non-OPEC members, may have to do with risk aversion. This paper describes an alternative theory of OPEC behavior based on risk aversion, using a two-piece von Neumann-Morgenstern utility function similar to Fishburn and Kochenberger (1979, Decision Sciences 10, 503-518) and Friedman and Savage (1948, Journal of Political Economy 56). The model shows possible low oil production behavior. (author)
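
    A two-piece utility function of the kind cited, concave above a reference point and steeper below it, can make a producer reject a capacity expansion whose expected value is higher but whose downside falls below the target. A minimal illustration, with all parameter values (alpha, beta, loss_aversion, and the two lotteries) assumed for the example rather than taken from the paper:

```python
def two_piece_utility(x, target=0.0, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Two-piece utility: concave gains above the target, steeper losses below."""
    if x >= target:
        return (x - target) ** alpha
    return -loss_aversion * (target - x) ** beta

def expected_utility(lottery):
    """Lottery is a list of (probability, payoff) pairs."""
    return sum(p * two_piece_utility(x) for p, x in lottery)

safe = [(1.0, 1.0)]                  # modest, certain revenue from low capacity
risky = [(0.5, 4.0), (0.5, -1.0)]    # expansion: higher mean, downside risk

ev_safe = sum(p * x for p, x in safe)
ev_risky = sum(p * x for p, x in risky)
eu_safe = expected_utility(safe)
eu_risky = expected_utility(risky)
```

    Here the risky expansion has the higher expected value, yet the two-piece utility ranks the safe option above it, which is the qualitative mechanism the paper invokes for under-investment in capacity.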

  8. Search for the Standard Model Higgs boson produced in the decay ...

    Indian Academy of Sciences (India)

    2012-10-06

    Search for the Standard Model Higgs boson in the decay mode H → ZZ → 2l2τ, where l = μ, e, is presented, based on CMS data corresponding to an integrated luminosity of 1.1 fb⁻¹ at √

  9. Notified Access: Extending Remote Memory Access Programming Models for Producer-Consumer Synchronization

    KAUST Repository

    Belli, Roberto

    2015-05-01

    Remote Memory Access (RMA) programming enables direct access to low-level hardware features to achieve high performance for distributed-memory programs. However, the design of RMA programming schemes focuses on memory access and less on synchronization. For example, in contemporary RMA programming systems, the widely used producer-consumer pattern can only be implemented inefficiently, incurring the overhead of an additional round-trip message. We propose Notified Access, a scheme in which the target process of an access can receive a completion notification. This scheme enables direct and efficient synchronization with a minimum number of messages. We implement our scheme in an open source MPI-3 RMA library and demonstrate lower overheads (two cache misses) than other point-to-point synchronization mechanisms for each notification. We also evaluate our implementation on three real-world benchmarks: a stencil computation, a tree computation, and a Cholesky factorization implemented with tasks. Our scheme always performs better than traditional message passing and other existing RMA synchronization schemes, providing up to 50% speedup on small messages. Our analysis shows that Notified Access is a valuable primitive for any RMA system. Furthermore, we provide guidance for the design of low-level network interfaces to support Notified Access efficiently.

  10. Time evolution in deparametrized models of loop quantum gravity

    Science.gov (United States)

    Assanioussi, Mehdi; Lewandowski, Jerzy; Mäkinen, Ilkka

    2017-07-01

    An important aspect in understanding the dynamics in the context of deparametrized models of loop quantum gravity (LQG) is to obtain sufficient control on the quantum evolution generated by a given Hamiltonian operator. More specifically, we need to be able to compute the evolution of relevant physical states and observables with relatively good precision. In this article, we introduce an approximation method to deal with the physical Hamiltonian operators in deparametrized LQG models, and we apply it to models in which a free Klein-Gordon scalar field or a nonrotational dust field is taken as the physical time variable. This method is based on using standard time-independent perturbation theory of quantum mechanics to define a perturbative expansion of the Hamiltonian operator, the small perturbation parameter being determined by the Barbero-Immirzi parameter β. This method allows us to define an approximate spectral decomposition of the Hamiltonian operators and hence to compute the evolution over a certain time interval. As a specific example, we analyze the evolution of expectation values of the volume and curvature operators starting with certain physical initial states, using both the perturbative method and a straightforward expansion of the expectation value in powers of the time variable. This work represents a first step toward achieving the goal of understanding and controlling the new dynamics developed in Alesci et al. [Phys. Rev. D 91, 124067 (2015), 10.1103/PhysRevD.91.124067] and Assanioussi et al. [Phys. Rev. D 92, 044042 (2015), 10.1103/PhysRevD.92.044042].
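
    The approximation strategy described, standard time-independent perturbation theory applied to a split H = H0 + εV, can be illustrated on a small matrix, where the second-order eigenvalue estimates can be checked against exact diagonalization. The matrices below are arbitrary stand-ins, not the LQG operators:

```python
import numpy as np

rng = np.random.default_rng(1)
H0 = np.diag([0.0, 1.0, 2.5, 4.0])               # unperturbed, nondegenerate spectrum
V = rng.normal(size=(4, 4))
V = (V + V.T) / 2                                # symmetric perturbation
eps = 0.05                                       # small expansion parameter

E0 = np.diag(H0).copy()
E1 = eps * np.diag(V)                            # first order: diagonal elements of V
E2 = np.zeros(4)                                 # second order: sum over intermediate states
for n in range(4):
    for m in range(4):
        if m != n:
            E2[n] += eps ** 2 * V[m, n] ** 2 / (E0[n] - E0[m])

E_pert = E0 + E1 + E2                            # perturbative eigenvalue estimates
E_exact = np.linalg.eigvalsh(H0 + eps * V)       # exact spectrum for comparison
```

    With gaps of order one and ε = 0.05, the residual error is of third order in ε, so the perturbative spectrum is a far better approximation than the unperturbed one; the same logic underlies the approximate spectral decomposition used in the paper.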

  11. Pattern recognition analysis and classification modeling of selenium-producing areas

    Science.gov (United States)

    Naftz, D.L.

    1996-01-01

    Established chemometric and geochemical techniques were applied to water-quality data from 23 National Irrigation Water Quality Program (NIWQP) study areas in the Western United States. These techniques were applied to the NIWQP data set to identify common geochemical processes responsible for the mobilization of selenium and to develop a classification model that uses major-ion concentrations to identify areas containing elevated selenium concentrations in water that could pose a hazard to waterfowl. Pattern recognition modeling of the simple-salt data computed with the SNORM geochemical program indicates three principal components that explain 95% of the total variance. A three-dimensional plot of PC 1, 2 and 3 scores shows three distinct clusters that correspond to distinct hydrochemical facies denoted as facies 1, 2 and 3. Facies 1 samples are distinguished by the absence of the CaCO3 simple salt and elevated concentrations of the NaCl, CaSO4, MgSO4 and Na2SO4 simple salts relative to water samples in facies 2 and 3. Water samples in facies 2 are distinguished from facies 1 by the absence of the MgSO4 simple salt and the presence of the CaCO3 simple salt. Water samples in facies 3 are similar to samples in facies 2, with the absence of both the MgSO4 and CaSO4 simple salts. Water samples in facies 1 have the largest selenium concentration (10 μg/l), compared to a median concentration of 2.0 μg/l and less than 1.0 μg/l for samples in facies 2 and 3. A classification model using the soft independent modeling by class analogy (SIMCA) algorithm was constructed with data from the NIWQP study areas. The classification model was successful in identifying water samples with a selenium concentration hazardous to some species of waterfowl from a test data set comprising 2,060 water samples from throughout Utah and Wyoming.
Application of chemometric and geochemical techniques during data synthesis analysis of multivariate environmental databases from other
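
    The pattern-recognition step, principal components of simple-salt compositions separating hydrochemical facies, can be sketched with synthetic data. The salt concentrations below are invented to mimic three facies-like clusters; they are not NIWQP data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical simple-salt concentrations (columns: NaCl, CaSO4, MgSO4, CaCO3)
facies1 = rng.normal([8, 6, 5, 0], 0.5, size=(30, 4))
facies2 = rng.normal([3, 4, 0, 2], 0.5, size=(30, 4))
facies3 = rng.normal([2, 0, 0, 2], 0.5, size=(30, 4))
X = np.vstack([facies1, facies2, facies3])

Xc = X - X.mean(axis=0)                  # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)      # variance fraction per component
scores = Xc @ Vt[:3].T                   # sample scores on PC 1-3
```

    When between-cluster differences dominate the noise, the first few components capture nearly all of the variance and a 3-D score plot separates the clusters, analogous to the three facies reported above.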

  12. Time-dependent deformation source model of Kilauea volcano obtained via InSAR time series and inversion modeling

    Science.gov (United States)

    Zhai, G.; Shirzaei, M.

    2014-12-01

    The Kilauea volcano, Hawaii Island, is one of the most active volcanoes worldwide. Its complex system, including magma reservoirs and rift zones, provides a unique opportunity to investigate the dynamics of magma transport and supply. The relatively shallow magma reservoir beneath the caldera stores magma prior to eruption at the caldera or migration to the rift zones. Additionally, the temporally variable pressure in the magma reservoir causes changes in the stress field, driving dike propagation and occasional intrusions at the eastern rift zone. Thus, constraining the time-dependent evolution of the magma reservoir plays an important role in understanding magma processes such as supply, storage, transport and eruption. The recent development of space-based monitoring technology, InSAR (interferometric synthetic aperture radar), allows the detection of subtle surface deformation at high spatial resolution and accuracy. In order to understand the dynamics of the magma chamber at the Kilauea summit area and the associated stress field, we explored SAR data sets acquired in two overlapping tracks of Envisat SAR data during the period 2003-2010. The combined InSAR time series includes 100 samples measuring summit deformation at unprecedented spatiotemporal resolution. To investigate the source of the summit deformation field, we propose a novel time-dependent inverse modeling approach to constrain the dynamics of the reservoir volume change within the summit magma reservoir in three dimensions. In conjunction with seismic and gas data sets, the obtained time-dependent model could resolve the temporally variable relation between shallow and deep reservoirs, as well as their connection to the rift zone via stress changes. The data and model improve the understanding of the Kilauea plumbing system, the physics of eruptions, and the mechanics of rift intrusions, and enhance eruption forecast models.

  13. Space-for-time substitution works in everglades ecological forecasting models.

    Directory of Open Access Journals (Sweden)

    Amanda I Banet

    Full Text Available Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably to, and sometimes better than, temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared to other studies to determine whether ecosystem features influence whether space-for-time substitution is feasible. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and a small lag in organismal response to the driver variable.

  14. Predictive Model of Surgical Time for Revision Total Hip Arthroplasty.

    Science.gov (United States)

    Wu, Albert; Weaver, Michael J; Heng, Marilyn M; Urman, Richard D

    2017-07-01

    Maximizing operating room utilization in orthopedic and other surgeries relies on accurate estimates of surgical control time (SCT). A variety of case- and patient-specific variables can influence the duration of surgical time during revision total hip arthroplasty (THA). We hypothesized that these variables are better predictors of actual SCT (aSCT) than a surgeon's own prediction (pSCT). All revision THAs from October 2008 to September 2014 from one institution were accessed. Variables for each case included aSCT, pSCT, patient age, gender, body mass index, American Society of Anesthesiologists Physical Status class, active infection, periprosthetic fracture, bone loss, heterotopic ossification, and implantation/explantation of a well-fixed acetabular/femoral component. These were incorporated in a stepwise fashion into a multivariate regression model for aSCT with a significance cutoff of 0.15. This was compared to a univariate regression model of aSCT that used only pSCT. In total, 516 revision THAs were analyzed. After stepwise selection, patient age and American Society of Anesthesiologists Physical Status were excluded from the model. The most significant increase in aSCT was seen with implantation of a new femoral component (24.0 min), followed by explantation of a well-fixed femoral component (18.7 min) and significant bone loss (15.0 min). Overall, the multivariate model had an improved r2 of 0.49, compared to 0.16 from using pSCT alone. A multivariate regression model can assist surgeons in more accurately predicting the duration of revision THAs. The strongest predictors of increased aSCT are explantation of a well-fixed femoral component, placement of an entirely new femoral component, and the presence of significant bone loss. Copyright © 2017 Elsevier Inc. All rights reserved.
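
    The comparison reported, a multivariate model on case variables versus a univariate model on the surgeon's prediction alone, can be reproduced in miniature on synthetic data. All coefficients and noise levels below are assumptions chosen to loosely echo the reported effect sizes; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 516
# Hypothetical binary case features: new femoral component, explant, bone loss
new_fem, explant, bone = (rng.random((3, n)) < 0.4).astype(float)
surgeon_guess = 120 + 20 * rng.normal(size=n)          # pSCT, loosely informative
actual = (100 + 24.0 * new_fem + 18.7 * explant + 15.0 * bone
          + 0.2 * (surgeon_guess - 120) + 25 * rng.normal(size=n))

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_uni = r_squared(surgeon_guess.reshape(-1, 1), actual)
r2_multi = r_squared(np.column_stack([new_fem, explant, bone, surgeon_guess]), actual)
```

    When the case variables carry most of the explainable variation, the multivariate fit improves R^2 substantially over the surgeon's estimate alone, mirroring the 0.49 versus 0.16 pattern in the abstract.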

  15. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency, a measure of network interconnectedness, decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
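A "functionally feedforward trajectory embedded in a recurrent network" can be sketched as a synfire-like chain: the recurrent weight matrix makes each unit excite its successor, while a weak uniform inhibitory offset (a crude stand-in for the dynamic E/I balance discussed above) keeps activity sparse, so elapsed time can be decoded from which unit is currently active. This is an illustrative toy, not the paper's trained RNN, and all parameters are hypothetical.

```python
import numpy as np

N, T = 50, 60              # units, simulation steps

# Recurrent weight matrix that is functionally feedforward: each unit
# drives the next; the small uniform negative offset is a simplified
# stand-in for background inhibition.
W = np.full((N, N), -0.1)
for i in range(N - 1):
    W[i + 1, i] = 2.9      # excitatory "chain" connection

r = np.zeros(N)
r[0] = 1.0                 # a transient input launches the trajectory
rates = np.zeros((T, N))
for t in range(T):
    rates[t] = r
    r = np.maximum(0.0, np.tanh(W @ r))   # rectified rate units

# Each unit peaks exactly one step after its predecessor, so the
# population sequence acts as a clock: argmax over units decodes time.
peak_times = rates.argmax(axis=0)
print(peak_times[:10])   # → [0 1 2 3 4 5 6 7 8 9]
```

The ordered sequence of peak times is what makes the activity a usable timing code; in the actual model this ordering emerges from training rather than being wired in by hand.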

  16. A Model for Learning Over Time: The Big Picture

    Science.gov (United States)

    Amato, Herbert K.; Konin, Jeff G.; Brader, Holly

    2002-01-01

    Objective: To present a method of describing the concept of “learning over time” with respect to its implementation into an athletic training education program curriculum. Background: The formal process of learning over time has recently been introduced as a required way for athletic training educational competencies and clinical proficiencies to be delivered and mastered. Learning over time incorporates the documented cognitive, psychomotor, and affective skills associated with the acquisition, progression, and reflection of information. This method of academic preparation represents a move away from a quantitative-based learning module toward a proficiency-based mastery of learning. Little research or documentation can be found demonstrating either the specificity of this concept or suggestions for its application. Description: We present a model for learning over time that encompasses multiple indicators for assessment in a successive format. Based on a continuum approach, cognitive, psychomotor, and affective characteristics are assessed at different levels in classroom and clinical environments. Clinical proficiencies are a common set of entry-level skills that need to be integrated into the athletic training educational domains. Objective documentation is presented, including the skill breakdown of a task and a matrix to identify a timeline of competency and proficiency delivery. Clinical Advantages: The advantages of learning over time pertain to the integration of cognitive knowledge into clinical skill acquisition. Given the fact that learning over time has been implemented as a required concept for athletic training education programs, this model may serve to assist those program faculty who have not yet developed, or are in the process of developing, a method of administering this approach to learning. PMID:12937551

  17. Analysis of real-time reservoir monitoring : reservoirs, strategies, & modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Mani, Seethambal S.; van Bloemen Waanders, Bart Gustaaf; Cooper, Scott Patrick; Jakaboski, Blake Elaine; Normann, Randy Allen; Jennings, Jim (University of Texas at Austin, Austin, TX); Gilbert, Bob (University of Texas at Austin, Austin, TX); Lake, Larry W. (University of Texas at Austin, Austin, TX); Weiss, Chester Joseph; Lorenz, John Clay; Elbring, Gregory Jay; Wheeler, Mary Fanett (University of Texas at Austin, Austin, TX); Thomas, Sunil G. (University of Texas at Austin, Austin, TX); Rightley, Michael J.; Rodriguez, Adolfo (University of Texas at Austin, Austin, TX); Klie, Hector (University of Texas at Austin, Austin, TX); Banchs, Rafael (University of Texas at Austin, Austin, TX); Nunez, Emilio J. (University of Texas at Austin, Austin, TX); Jablonowski, Chris (University of Texas at Austin, Austin, TX)

    2006-11-01

    The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data are fed in real time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value of information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new-generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI analysis demonstrated the increased subsurface resolution afforded by additional sensor data. Our findings show that VOI studies are a practical means of ascertaining the value associated with a technology, in this case the application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate that history matching is improved as more information is added to the objective function, clearly indicating that sensor information can help reduce the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report. The next-generation sensors aspect of the project evaluated sensors and packaging.
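The VOI decision-tree idea reduces to comparing the expected value of committing to one action under the prior against the expected value of choosing the best action once sensor data reveal the reservoir state. A minimal two-state, two-action sketch; the probabilities and payoffs are entirely hypothetical, chosen only to show the calculation.

```python
# Two reservoir states with a prior, two actions; payoffs in arbitrary $M.
p_good = 0.6
prob = {"good": p_good, "poor": 1.0 - p_good}
payoff = {
    ("develop", "good"): 100.0, ("develop", "poor"): -40.0,
    ("defer",   "good"):   0.0, ("defer",   "poor"):   0.0,
}
actions = ("develop", "defer")
states = ("good", "poor")

def expected_value(action):
    return sum(prob[s] * payoff[(action, s)] for s in states)

# Without sensors: commit to the single best action under the prior.
ev_prior = max(expected_value(a) for a in actions)

# With (perfect) sensor information: pick the best action in each state.
ev_informed = sum(prob[s] * max(payoff[(a, s)] for a in actions)
                  for s in states)

# Upper bound on what the sensor system is worth before subtracting
# its installation and maintenance cost.
voi = ev_informed - ev_prior
print(ev_prior, ev_informed, voi)
```

A real study replaces "perfect information" with the imperfect resolution a sensor actually delivers, which shrinks the VOI toward zero; the decision rule is to deploy only when VOI exceeds the sensor system's cost.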

  18. A validation study of reconstructed rapid prototyping models produced by two technologies.

    Science.gov (United States)

    Dietrich, Christian Andreas; Ender, Andreas; Baumgartner, Stefan; Mehl, Albert

    2017-09-01

    To determine the accuracy (trueness and precision) of two different rapid prototyping (RP) techniques for the physical reproduction of three-dimensional (3D) digital orthodontic study casts, a comparative assessment was performed using two 3D STL files of two different maxillary dentitions (two cases) as a reference. Five RP replicas per case were fabricated using both stereolithography (SLA) and the PolyJet system. The 20 reproduced casts were digitized with a highly accurate reference scanner, and surface superimpositions were performed. Precision was measured by superimposing the digitized replicas within each case with themselves; trueness was assessed by superimposing the digitized replicas with the corresponding STL reference files. Statistical significance between the two tested RP procedures was evaluated with independent-sample t-tests (P < .05). The SLA and PolyJet replicas showed statistically significant differences in both trueness and precision. The precision of both tested RP systems was high, with mean deviations of 23 (±6) μm in stereolithographic models and 46 (±13) μm in PolyJet replicas. The mean deviation for trueness was 109 (±4) μm in stereolithographic replicas and 66 (±14) μm in PolyJet replicas. Compared against the STL reference files, the PolyJet replicas showed higher trueness than the SLA models, but the precision measurements favored the SLA technique. The dimensional errors observed in this study were at most 127 μm. Both types of reproduced digital orthodontic models are suitable for diagnostics and treatment planning.
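The trueness comparison above rests on an independent-sample t-test over per-replica mean surface deviations. A sketch of the pooled-variance form of that test; the deviation values are invented, sized loosely like the reported means, and are not the study's measurements.

```python
import numpy as np

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Hypothetical mean surface deviation (trueness, um) per replica,
# five replicas per system as in the study design.
sla = np.array([105.0, 111.0, 108.0, 112.0, 109.0])
polyjet = np.array([52.0, 80.0, 60.0, 70.0, 68.0])

t = t_statistic(sla, polyjet)
# |t| above the two-sided 5% critical value for df = 8 (about 2.306)
# indicates a significant trueness difference between the systems.
print(f"t = {t:.2f}")
```

In practice one would use a library routine (e.g., a standard two-sample t-test implementation) to also obtain the p-value; the hand-rolled statistic here just makes the arithmetic behind "P < .05" explicit.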

  19. Research to Develop Process Models for Producing a Dual Property Titanium Alloy Compressor Disk.

    Science.gov (United States)

    1981-10-01

    [The abstract of this 1981 report is not recoverable from the scanned source; the OCR output preserves only work-breakdown fragments referring to material modeling and process modeling interface tasks (with WSU flow-stress equations noted as "very manageable" and able to "replicate the measured flow stress data to a high degree of precision"), plus two citation fragments: Paton, N. E. and Mahoney, M. W., "Creep of Titanium-Silicon Alloys," Met. Trans. A, 1976, Vol. 7A, pp. 1685-1694; and Hart, E. W., "A ...]

  20. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that breakdown duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of each breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
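The two ingredients named above, a breakdown probability that grows with flow rate and a hazard model for breakdown duration, can be sketched in a small Monte Carlo loop. The logistic form and the Weibull parameters here are placeholders for illustration, not the paper's calibrated functions.

```python
import numpy as np

rng = np.random.default_rng(1)

def breakdown_probability(flow, capacity=2000.0, steepness=0.01):
    """Illustrative logistic breakdown probability vs. flow rate (veh/h/lane)."""
    return 1.0 / (1.0 + np.exp(-steepness * (flow - capacity)))

def breakdown_duration(shape=1.5, scale=10.0):
    """Duration (min) drawn from a Weibull hazard model; parameters hypothetical."""
    return scale * rng.weibull(shape)

flow = 2100.0                 # hypothetical demand level
n_intervals = 10_000          # simulated time intervals at this demand

# Each interval breaks down with probability p(flow); when it does,
# a duration is drawn from the hazard model.
breakdowns = rng.random(n_intervals) < breakdown_probability(flow)
durations = np.array([breakdown_duration() for _ in range(breakdowns.sum())])

# These two quantities feed travel-time reliability measures: how often
# flow breaks down, and how long a breakdown lasts once it starts.
print(f"breakdown freq = {breakdowns.mean():.3f}, "
      f"mean duration = {durations.mean():.1f} min")
```

Embedding draws like these inside a mesoscopic simulator, with the probability and hazard functions estimated from field data, is what lets the network model produce endogenous travel-time variability rather than assuming fixed capacities.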