WorldWideScience

Sample records for model time step

  1. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
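
    To make the mechanism concrete, here is a minimal sketch (not from the abstract) of leapfrog time stepping with the RA and RAW filters applied to the oscillation equation dx/dt = iωx. The filter parameter ν and the RAW coefficient α ≈ 0.53 follow Williams (2009); alpha = 1 recovers the standard RA filter.

```python
# Leapfrog integration of dx/dt = i*omega*x with the Robert-Asselin (RA)
# and Robert-Asselin-Williams (RAW) filters. alpha = 1.0 recovers RA;
# alpha ~ 0.53 is the RAW value suggested by Williams (2009).
import numpy as np

def leapfrog_filtered(omega=1.0, dt=0.2, nsteps=2000, nu=0.2, alpha=0.53):
    x_prev = 1.0 + 0.0j                    # x at time level n-1
    x_curr = np.exp(1j * omega * dt)       # exact value at level n to start
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * 1j * omega * x_curr   # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)    # filter displacement
        x_curr += alpha * d                # RA part, applied to middle level
        x_next += (alpha - 1.0) * d        # RAW correction to newest level
        x_prev, x_curr = x_curr, x_next
    return abs(x_curr)                     # exact solution keeps amplitude 1

print("RA  amplitude:", leapfrog_filtered(alpha=1.0))   # damped below 1
print("RAW amplitude:", leapfrog_filtered(alpha=0.53))  # much closer to 1
```

    The single extra line (the correction to `x_next`) is the "single line of code" the abstract refers to: it restores conservation of the three-time-level mean and removes the spurious damping.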

  2. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
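
    A hedged sketch of an energy-based step controller of the kind described above; the specific formula dt = max(dt_min, dt_max / sqrt(1 + α|dE/dt|²)) is a common choice in the PFC/MBE adaptive-stepping literature and is assumed here, as are the parameter values.

```python
# Energy-based adaptive step: small steps while the free energy changes
# rapidly, approaching dt_max as the solution nears steady state.
def adaptive_dt(E_curr, E_prev, dt_prev,
                dt_min=1e-3, dt_max=1.0, alpha=1e3):
    dEdt = (E_curr - E_prev) / dt_prev   # discrete energy dissipation rate
    return max(dt_min, dt_max / (1.0 + alpha * dEdt**2) ** 0.5)
```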

  3. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  4. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
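
    The core idea, integrating fast (gravity-wave-like) terms with small sub-steps inside a large step for the slow terms, can be illustrated with a toy linear problem; this sketch is an assumption-laden stand-in for the EMTSS vertical-mode splitting, not the UCLA model's scheme.

```python
# Subcycling sketch: a "slow" tendency takes one large step while a stiff
# "fast" tendency is subcycled m times inside it.
def multirate_step(u, dt, m, slow_tendency, fast_tendency):
    u = u + dt * slow_tendency(u)        # one large step for slow terms
    dtau = dt / m
    for _ in range(m):                   # m small steps for fast terms
        u = u + dtau * fast_tendency(u)
    return u

# Toy test: du/dt = i*w_slow*u - lam*u. The stiff decay term would be
# unstable at the large step (lam*dt = 5 > 2) but is stable when subcycled
# (lam*dt/m = 0.25).
w_slow, lam = 0.1, 100.0
u, dt, m = 1.0 + 0j, 0.05, 20
for _ in range(200):
    u = multirate_step(u, dt, m,
                       lambda v: 1j * w_slow * v,
                       lambda v: -lam * v)
print(abs(u))   # decays toward 0 as expected instead of blowing up
```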

  5. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, forecasting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The two main existing approaches, iterative and independent, either apply a one-step model recursively or treat each horizon of the multi-step task as an independent modeling problem; both generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
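
    The two baseline strategies the abstract names can be contrasted with a simple linear AR model; this sketch shows only those baselines (the VLM ensemble itself is more involved and not reproduced here), and the data and model order are made up.

```python
# Iterative vs. direct (independent) multi-step forecasting with least-
# squares AR models.
import numpy as np
from numpy.linalg import lstsq

def embed(x, p, shift=1):
    # rows [x_t-p+1..x_t] -> target x_{t+shift}
    X = np.array([x[i:i + p] for i in range(len(x) - p - shift + 1)])
    y = x[p + shift - 1:]
    return X, y

def fit_ar(x, p, shift=1):
    X, y = embed(x, p, shift)
    coef, *_ = lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coef

def forecast_iterative(x, p, H):
    coef = fit_ar(x, p, shift=1)
    hist = list(x[-p:])
    out = []
    for _ in range(H):                   # feed predictions back in
        nxt = np.dot(coef[:-1], hist[-p:]) + coef[-1]
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

def forecast_direct(x, p, H):
    out = []
    for h in range(1, H + 1):            # one separate model per horizon
        coef = fit_ar(x, p, shift=h)
        out.append(np.dot(coef[:-1], x[-p:]) + coef[-1])
    return np.array(out)

t = np.arange(400)
x = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).standard_normal(400)
print(forecast_iterative(x, p=10, H=5))
print(forecast_direct(x, p=10, H=5))
```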

  6. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

    co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.

  7. Intake flow and time step analysis in the modeling of a direct injection Diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br

    2010-07-01

This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under motored conditions. The three-dimensional model of a reciprocating engine geometry comprises a bowl-in-piston combustion chamber, an intake port of the shallow-ramp helical type and an exhaust port of conventional type. The equations are solved numerically, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial Finite Volumes CFD code. Parallel computation is employed. For the purpose of examining the in-cylinder turbulence characteristics, two parameters are observed: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow characteristics inside the cylinder during the intake stroke, so their study and understanding are very important. Additionally, the evolution of the discharge coefficient and swirl ratio along crank angle is correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model, in its low-Reynolds-number approach, with standard near-wall treatment. The system of partial differential equations to be solved consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, using a segregated solution algorithm. The enthalpy equation is also solved. A moving hexahedral trimmed mesh independence study is presented. In the same way, many convergence tests are performed, and a secure criterion established. The pressure field results are shown on a vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. It is also possible to note divergences between the time steps, mainly for the smaller time step. (author)

  8. Stepping Stones through Time

    Directory of Open Access Journals (Sweden)

    Emily Lyle

    2012-03-01

Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.

  9. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
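
    The gain from the distribution-function idea can be illustrated numerically. This is a hedged toy (the paper's actual cdf form and process equations are not given in the abstract): daily infiltration-excess runoff computed by integrating over an assumed exponential within-day intensity distribution, versus using the daily mean intensity alone.

```python
# Infiltration-excess runoff from a within-day rainfall-intensity
# distribution vs. from the daily mean intensity.
import numpy as np

def runoff_from_cdf(daily_rain_mm, rain_hours, fc_mm_per_h, n=10000):
    mean_i = daily_rain_mm / rain_hours          # mean intensity (mm/h)
    i = np.linspace(0, 20 * mean_i, n)           # intensity grid
    pdf = np.exp(-i / mean_i) / mean_i           # assumed exponential pdf
    excess = np.maximum(i - fc_mm_per_h, 0.0)    # infiltration excess
    return np.trapz(excess * pdf, i) * rain_hours

daily_rain, hours, fc = 30.0, 6.0, 8.0           # mean intensity 5 mm/h < fc
print(runoff_from_cdf(daily_rain, hours, fc))    # ~6 mm: bursts exceed fc
print(max(daily_rain / hours - fc, 0.0) * hours) # 0 mm: mean intensity misses it
```

    The mean-intensity calculation predicts no runoff at all, while integrating over the intensity distribution captures the short high-intensity bursts that actually generate it; this is exactly the nonlinearity the temporal-scaling approach is designed to preserve.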

  10. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

In summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering an area of nearly 70 km × 100 km. We also had a high-speed video (HSV) camera recording 50,000 images per second, collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. The stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches creates the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3-D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.

  11. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  12. BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics

    Science.gov (United States)

    Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.

    2010-12-01

BIOMAP simulates competition between two Plant Functional Types (PFT) at any given point in the conterminous U.S. using a time series of daily temperature (mean, minimum, maximum), precipitation, humidity, light and nutrients, with PFT-specific rooting within a multi-layer soil. The model employs a 2-layer canopy biophysics, Farquhar photosynthesis, the Beer-Lambert Law for light attenuation and a mechanistic soil hydrology. In essence, BIOMAP is a re-built version of the biogeochemistry model BIOME-BGC, recast in the form of the MAPSS biogeography model. Specific enhancements are: 1) the 2-layer canopy biophysics of Dolman (1993); 2) the unique MAPSS-based hydrology, which incorporates canopy evaporation, snow dynamics, infiltration and saturated and unsaturated percolation with ‘fast’ flow and base flow and a ‘tunable aquifer’ capacity, a metaphor of Darcy’s Law; and 3) a unique MAPSS-based stomatal conductance algorithm, which simultaneously incorporates vapor pressure and soil water potential constraints, based on physiological information, and many other improvements. Over small domains the PFTs can be parameterized as individual species to investigate fundamental vs. potential niche theory, while at coarser scales the PFTs can be rendered as more general functional groups. Since all of the model processes are intrinsically leaf to plot scale (physiology to PFT competition), the model essentially has no ‘intrinsic’ scale and can be implemented on a grid of any size, taking on the characteristics defined by the homogeneous climate of each grid cell. Currently, the model is implemented on the VEMAP 1/2 degree, daily grid over the conterminous U.S. Although both the thermal and water-limited ecotones are dynamic, following climate variability, the PFT distributions remain fixed. Thus, the model is currently being fitted with a ‘reproduction niche’ to allow full dynamic operation as a Dynamic General Vegetation Model (DGVM). While global simulations

  13. Time step MOTA thermostat simulation

    International Nuclear Information System (INIS)

    Guthrie, G.L.

    1978-09-01

The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation analysis was the approach chosen to determine the maximum gain value that could be used to minimize steady temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomena, and similar events. The programs BITT and CYLB are faster, but do not directly show ringing time.

  14. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    Science.gov (United States)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with a single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  15. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, although its accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
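
    For orientation, here is a minimal sketch of the first-order (modified) Patankar-Euler step for a production-destruction system dc_i/dt = Σ_j p_ij(c) − Σ_j d_ij(c), after Burchard et al. (2003), on which the abstract's higher-order Runge-Kutta variant builds; the nutrient-phytoplankton toy model below is an assumption for illustration.

```python
# Weighting destruction (and production) terms by the unknown new
# concentrations turns the explicit Euler update into a small linear solve
# that is unconditionally positive and mass-conserving.
import numpy as np

def patankar_euler_step(c, p, d, dt):
    """p(c), d(c) return matrices with p[i, j] = production of i from j
    and d[i, j] = loss of i to j."""
    P, D = p(c), d(c)
    n = len(c)
    A = np.eye(n)
    for i in range(n):
        A[i, i] += dt * D[i].sum() / c[i]
        for j in range(n):
            A[i, j] -= dt * P[i, j] / c[j]
    return np.linalg.solve(A, c)

# Toy nutrient (N) / phytoplankton (P) exchange: uptake u*N, recycling r*P.
u, r = 2.0, 1.0
p = lambda c: np.array([[0.0, r * c[1]], [u * c[0], 0.0]])
d = lambda c: np.array([[0.0, u * c[0]], [r * c[1], 0.0]])

c = np.array([9.0, 1.0])
for _ in range(50):
    c = patankar_euler_step(c, p, d, dt=0.5)   # large step, still positive
print(c, c.sum())   # stays positive and conserves total mass (10.0)
```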

  16. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...

  17. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  18. Grief: Difficult Times, Simple Steps.

    Science.gov (United States)

    Waszak, Emily Lane

This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…

  19. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

2006 to 2008 were used for calibrating fourteen models for estimating solar radiation at seasonal and annual time steps, and the measured data of years 2009 and 2010 were used for evaluating the results. The equations used in this study were divided into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; (3) equations based on both sunshine hours and air temperature. A statistical comparison was then carried out to select the best equation for estimating solar radiation at seasonal and annual time steps. For this purpose, in the validation stage a combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models at the mentioned time steps. Results and Discussion: The mean MSD values of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring through winter, respectively, and 15.40 at the annual time step. The results therefore showed that the equations were highly accurate for autumn but performed poorly in the other seasons, so the annual time-step equations were more appropriate than the seasonal ones. Also, the mean MSD values of the equations based only on sunshine hours, only on air temperature, and on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. The results thus indicated that the models based only on air temperature performed worst for estimating solar radiation in the Shiraz region, so using sunshine hours for estimating solar radiation is necessary. Conclusions: In this study, for estimating solar radiation at seasonal and annual time steps in the Shiraz region
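
    Representative members of the two main model families compared above are the sunshine-based Angstrom-Prescott equation and the temperature-based Hargreaves equation. The sketch below uses their standard textbook forms with default coefficients and made-up daily inputs, not the paper's calibrated values, together with MSD as the comparison metric.

```python
# Sunshine-based vs. temperature-based daily solar radiation estimates,
# compared by mean square deviation (MSD).
import numpy as np

def angstrom_prescott(Ra, n, N, a=0.25, b=0.50):
    # Rs = Ra * (a + b * n/N); n/N is relative sunshine duration
    return Ra * (a + b * n / N)

def hargreaves(Ra, tmax, tmin, krs=0.16):
    # Rs = krs * sqrt(Tmax - Tmin) * Ra; krs = 0.16 interior default
    return krs * np.sqrt(tmax - tmin) * Ra

def msd(est, obs):
    return np.mean((est - obs) ** 2)

# Toy daily inputs: extraterrestrial radiation Ra (MJ m-2 d-1), sunshine
# hours n, day length N, max/min temperature (degC), observed Rs.
Ra = np.array([30.1, 32.4, 28.7]); n = np.array([8.2, 10.5, 6.1])
N = np.array([11.0, 12.3, 10.4]); tmax = np.array([28.0, 33.0, 24.0])
tmin = np.array([14.0, 18.0, 12.0]); Rs_obs = np.array([18.9, 23.5, 15.2])

for name, est in [("sunshine", angstrom_prescott(Ra, n, N)),
                  ("temperature", hargreaves(Ra, tmax, tmin))]:
    print(name, "MSD:", round(msd(est, Rs_obs), 2))
```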

  20. The association between choice stepping reaction time and falls in older adults--a path analysis model

    NARCIS (Netherlands)

    Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.

    2010-01-01

    Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of

  1. 3D elastic wave modeling using modified high‐order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam

    2009-01-01

    We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
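
    As context, this sketch shows the standard Taylor-expansion-based (Lax-Wendroff type) high-order time stepping the abstract starts from, applied to the 1D wave equation u_tt = c²u_xx; the paper's actual contribution, the modified temporal coefficients with improved stability, is not reproduced here.

```python
# u^{n+1} = 2u^n - u^{n-1} + dt^2 c^2 u_xx + (dt^4/12) c^4 u_xxxx,
# i.e. the temporal Taylor series with time derivatives replaced by
# spatial ones via the wave equation.
import numpy as np

c, L, nx = 1.0, 1.0, 200
dx = L / nx
dt = 0.9 * dx / c                      # within the classical stability limit
x = np.arange(nx) * dx

def dxx(u):                            # second derivative, periodic domain
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

u_prev = np.exp(-200 * (x - 0.5) ** 2) # Gaussian pulse
u_curr = u_prev.copy()                 # start at rest
for _ in range(400):
    lap = dxx(u_curr)
    u_next = (2 * u_curr - u_prev
              + dt**2 * c**2 * lap
              + dt**4 / 12 * c**4 * dxx(lap))   # 4th-order temporal term
    u_prev, u_curr = u_curr, u_next
print(float(u_curr.max()))   # two counter-propagating pulses of height ~0.5
```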

  2. Monte Carlo steps per spin vs. time in the master equation II: Glauber kinetics for the infinite-range ising model in a static magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)

    2006-01-15

As an extension of our previous work on the relationship between time in Monte Carlo simulation and time in the continuous master equation for the infinite-range Glauber kinetic Ising model in the absence of any magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin, as time in the MC simulations, again turns out to be proportional to time in the master equation for the model in relatively larger static magnetic fields at any temperature. At and near the critical point in a relatively smaller magnetic field, the model exhibits a significant finite-size dependence, and the solution to the Suzuki-Kubo differential equation stemming from the master equation needs to be re-scaled to fit the Monte Carlo steps per spin for systems with different numbers of spins.
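
    A hedged sketch of the comparison described above: Glauber single-spin-flip Monte Carlo for the infinite-range Ising model in a field h, with Monte Carlo steps per spin (MCS) identified with time t in the mean-field (Suzuki-Kubo) rate equation dm/dt = −m + tanh(β(Jm + h)). All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, h, beta = 4000, 1.0, 0.5, 0.8
mcs_total = 10

spins = -np.ones(N)                     # start from m = -1
M = spins.sum()
m_mc = []
for step in range(mcs_total * N):       # one MCS = N attempted flips
    i = rng.integers(N)
    hloc = J * (M - spins[i]) / N + h   # mean field from the other spins
    p_flip = 0.5 * (1.0 - spins[i] * np.tanh(beta * hloc))  # Glauber rate
    if rng.random() < p_flip:
        M -= 2 * spins[i]
        spins[i] = -spins[i]
    if (step + 1) % N == 0:
        m_mc.append(M / N)

# Suzuki-Kubo mean-field equation, with t measured in MCS
m, dt, m_ode = -1.0, 0.01, []
for k in range(int(mcs_total / dt)):
    m += dt * (-m + np.tanh(beta * (J * m + h)))
    if (k + 1) % int(1 / dt) == 0:
        m_ode.append(m)

for t, (a, b) in enumerate(zip(m_mc, m_ode), 1):
    print(f"t={t} MCS: m_MC={a:+.3f}  m_SK={b:+.3f}")
```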

  3. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
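
    The second failure mode the abstract identifies, a step that depends on phase space variables, is easy to demonstrate numerically. This is a standard illustration with assumed parameters, not the paper's method: leapfrog (Stormer-Verlet) on a Kepler orbit keeps the energy error bounded at a fixed step, but naively rescaling the step with position (here dt ∝ r^{3/2}) destroys that long-time behavior.

```python
import numpy as np

def accel(q):
    r = np.linalg.norm(q)
    return -q / r**3

def orbit(adaptive, nsteps=20000, dt0=0.01):
    q = np.array([1.0 - 0.6, 0.0])           # perihelion of an e = 0.6 orbit
    p = np.array([0.0, np.sqrt((1 + 0.6) / (1 - 0.6))])
    E0 = 0.5 * p @ p - 1 / np.linalg.norm(q)
    drift = 0.0
    for _ in range(nsteps):
        dt = dt0 * np.linalg.norm(q) ** 1.5 if adaptive else dt0
        p = p + 0.5 * dt * accel(q)          # kick
        q = q + dt * p                       # drift
        p = p + 0.5 * dt * accel(q)          # kick
        E = 0.5 * p @ p - 1 / np.linalg.norm(q)
        drift = max(drift, abs(E - E0))
    return drift

print("fixed step     max |dE|:", orbit(adaptive=False))
print("adaptive dt(q) max |dE|:", orbit(adaptive=True))   # secular growth
```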

  4. An application of the time-step topological model for three-phase transformer no-load current calculation considering hysteresis

    International Nuclear Information System (INIS)

    Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran

    2017-01-01

In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the hysteresis model used; here, a hysteresis model based on the first-order reversal curve has been used. - Highlights: • A lumped-element method for modelling transformers is demonstrated. • The method can include hysteresis and arbitrarily complex geometries. • Simulation results for one power transformer are compared to measurements. • An analytical curve-fitting expression for static hysteresis loops is shown.

  5. Hybrid High-Fidelity Modeling of Radar Scenarios Using Atemporal, Discrete-Event, and Time-Step Simulation

    Science.gov (United States)

    2016-12-01

[List-of-figures fragment; only the figure captions are recoverable: "High-efficiency and high-fidelity radar system simulation flowchart"; "Methodology roadmaps: experimental-design flowchart showing hybrid sensor models integrated from three simulation categories, followed by overall simulation display and output produced by Java Simkit program"; "Hybrid".]

  6. A Straightforward Convergence Method for ICCG Simulation of Multiloop and Time-Stepping FE Model of Synchronous Generators with Simultaneous AC and Rectified DC Connections

    Directory of Open Access Journals (Sweden)

    Shanming Wang

    2015-01-01

Now electric machines integrate with power electronics to form inseparable systems in many applications requiring high performance. For such systems, two kinds of nonlinearities, the magnetic nonlinearity of the iron core and the circuit nonlinearity caused by power electronic devices, coexist at the same time, which makes simulation time-consuming. In this paper, the multiloop model combined with the FE model of AC-DC synchronous generators, as one example of an electric machine with power electronics system, is set up. The FE method is applied for the magnetic nonlinearity and a variable-step, variable-topology simulation method is applied for the circuit nonlinearity. In order to improve the simulation speed, the incomplete Cholesky conjugate gradient (ICCG) method is used to solve the state equation. However, when a power electronic device switches off, a convergence difficulty occurs, so a straightforward approach to achieve convergence of the simulation is proposed. Finally, the simulation results are compared with experiments.

  7. Nucleoside uptake in macrophages from various murine strains: a short-time and a two-step stimulation model

    International Nuclear Information System (INIS)

    Busolo, F.; Conventi, L.; Grigolon, M.; Palu, G.

    1991-01-01

The kinetics of [3H]-uridine uptake by murine peritoneal macrophages (pM phi) is altered early after exposure to a variety of stimuli. Alterations caused by Candida albicans, lipopolysaccharide (LPS) and recombinant interferon-gamma (rIFN-gamma) were similar in SAVO, C57BL/6, C3H/HeN and C3H/HeJ mice, and were not correlated with an activation process, as shown by the amount of tumor necrosis factor-alpha (TNF-alpha) released. Short-time exposure to all stimuli resulted in an increased nucleoside uptake by SAVO pM phi, suggesting that the tumoricidal function of this cell depends either on the type of stimulus or on the time at which the specific interaction with the cell receptor takes place. Experiments with priming and triggering signals confirmed the above findings, indicating that the increase or decrease of nucleoside uptake into the cell depends essentially on the chemical nature of the priming stimulus. The triggering stimulus, on the other hand, is only able to amplify the primary response.

  8. Time to pause before the next step

    International Nuclear Information System (INIS)

    Siemon, R.E.

    1998-01-01

    Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here

  9. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

Highlights: • Time step length largely affects the efficiency of MC burnup calculations. • The efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy.

  10. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
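
    A back-of-envelope sketch of why LTS pays off; the mesh sizes, wave speed and CFL constant below are made up. Each element's stable step is dt_e = C·h_e/v; a global explicit scheme runs everything at min(dt_e), while a power-of-two LTS scheme lets each element run at dt_min·2^level, capped by its own limit.

```python
# Estimated speedup of two-level-power LTS over global time stepping,
# measured in element-updates per simulated second.
import numpy as np

rng = np.random.default_rng(0)
h = np.concatenate([rng.uniform(90, 110, 9800),   # coarse bulk mesh (m)
                    rng.uniform(2, 10, 200)])     # local refinement (m)
v, C = 3000.0, 0.5                                # wave speed (m/s), CFL const
dt_e = C * h / v                                  # per-element stable step
dt_min = dt_e.min()

levels = np.floor(np.log2(dt_e / dt_min)).astype(int)
work_global = len(h) / dt_min                     # everyone at the global dt
work_lts = np.sum(1.0 / (dt_min * 2.0 ** levels)) # each at its own dt
print("speedup ~", round(work_global / work_lts, 1))
```

    With a few percent of the elements forcing a ~50x smaller step, the estimated speedup is roughly an order of magnitude, which is the regime the abstract targets.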

  11. Studies on steps affecting tritium residence time in solid blanket

    International Nuclear Information System (INIS)

    Tanaka, Satoru

    1987-01-01

For a self-sustaining CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required so that the residence time of tritium in the blanket is short. A short residence time means that the tritium inventory in the blanket is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were evaluated experimentally. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories and the tritium in the gas phase are conceivable. A code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the results of the calculation is shown. The blanket is that of REPUTER-1, a conceptual design of a commercial reversed field pinch fusion reactor studied at the University of Tokyo. The experimental studies on the migration steps of tritium are reported. (Kako, I.)

  12. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required to r...... accuracy as a fixed time-step scheme however at a much less computational cost....

  13. [Collaborative application of BEPS at different time steps.

    Science.gov (United States)

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

BEPSHourly is designed to simulate the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale, at the cost of a more complex model structure and a time-consuming solving process. The daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes; it is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum rate of carboxylation (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data could improve the simulation ability of the model. In 2011, the primary productivity of the forest types, in descending order, was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.
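
    A heavily hedged sketch of the calibration step: fitting (Vcmax, Jmax)-like parameters to site flux data by least squares. A rectangular light/Rubisco co-limitation toy model stands in for the Farquhar model actually used in BEPSHourly; the data, parameter values and function names are all illustrative.

```python
# Fit two photosynthesis-like parameters to (synthetic) flux observations.
import numpy as np
from scipy.optimize import least_squares

def toy_gpp(params, par):
    vcmax, jmax = params
    a_light = jmax * par / (par + 400.0)   # light-limited rate (toy)
    return np.minimum(a_light, vcmax)      # Rubisco-limited cap (toy)

par = np.linspace(50, 1800, 40)            # half-hourly PAR values
rng = np.random.default_rng(3)
obs = toy_gpp([60.0, 110.0], par) + rng.normal(0, 2, par.size)

fit = least_squares(lambda p: toy_gpp(p, par) - obs, x0=[30.0, 60.0],
                    bounds=([1, 1], [200, 400]))
print("recovered Vcmax, Jmax:", fit.x)     # then passed on to the daily model
```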

  14. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.

  15. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
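
    For a fully diffusion-controlled pair (Smoluchowski boundary condition), the reaction probability by time t is W(t) = (R/r0)·erfc((r0 − R)/√(4Dt)), so a single uniform draw decides both whether and when the pair reacts; that inversion is the heart of the IRT method. The sketch below assumes this textbook case and illustrative parameter values, not the paper's full algorithm set.

```python
# IRT sampling of the reaction time for one diffusing pair.
import numpy as np
from scipy.special import erfcinv

def irt_reaction_time(r0, R, D, rng):
    """Return the sampled reaction time, or np.inf if the pair escapes."""
    u = rng.random()
    if u >= R / r0:                      # W(inf) = R/r0: pair never reacts
        return np.inf
    arg = erfcinv(u * r0 / R)            # invert W(t) = u for t
    return ((r0 - R) / (2.0 * np.sqrt(D) * arg)) ** 2

rng = np.random.default_rng(7)
R, r0, D = 0.5, 2.0, 5e-3                # nm, nm, nm^2/ns (made-up scales)
times = [irt_reaction_time(r0, R, D, rng) for _ in range(100000)]
reacted = [t for t in times if np.isfinite(t)]
print("reaction fraction:", len(reacted) / len(times), "(theory:", R / r0, ")")
```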

  16. Combating cancer one step at a time

    Directory of Open Access Journals (Sweden)

    R.N Sugitha Nadarajah

    2016-10-01

widespread consequences, not only in a medical sense but also socially and economically,” says Dr. Abdel-Rahman. “We need to put in every effort to combat this fatal disease,” he adds. Tackling the spread of cancer and the increase in the number of cases reported every year is not without its challenges, he asserts. “I see the key challenges as the unequal availability of cancer treatments worldwide, the increasing cost of cancer treatment, and the increased median age of the population in many parts of the world, which carries with it a consequent increase in the risk of certain cancers,” he says. “We need to reassess the current pace and orientation of cancer research because, with time, cancer research is becoming industry-oriented rather than academia-oriented — which, in my view, could be very dangerous to the future of cancer research,” adds Dr. Abdel-Rahman. “Governments need to provide more research funding to improve the outcome of cancer patients,” he explains. His efforts and hard work have led to him receiving a number of distinguished awards, namely the UICC International Cancer Technology Transfer (ICRETT) fellowship in 2014 at the Investigational New Drugs Unit in the European Institute of Oncology, Milan, Italy; the EACR travel fellowship in 2015 at The Christie NHS Foundation Trust, Manchester, UK; and also several travel grants to Ireland, Switzerland, Belgium, Spain, and many other countries where he attended medical conferences. Dr. Abdel-Rahman is currently engaged in a project to establish a clinical/translational cancer research center at his institute, which seeks to incorporate various cancer-related disciplines in order to produce a real bench-to-bedside practice, hoping that it would “change research that may help shape the future of cancer therapy”. Dr. Abdel-Rahman is also an active founding member of the clinical research unit at his institute and is a representative to the prestigious European Organization for Research and

  17. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  18. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  19. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice provides only a heuristic criterion related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and that converge the nonlinearities in the governing equations within a time step. Commonly, nonlinearities in the governing equations are evaluated using existing (old time) data; the authors refer to this as the semi-implicit (SI) method. When a method converges the nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced time data (with appropriate time centering for accuracy).
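
    A generic sketch of a step controller in this spirit; the exact physically based criterion of the note is not reproduced, and the target fraction and clamping factors below are assumptions. The idea is to choose the next step so that the largest relative change in the dependent variable (e.g. the radiation energy density) stays near a target fraction per step.

```python
# Relative-change-based time step controller.
import numpy as np

def next_dt(dt, u_new, u_old, xi=0.1, grow=2.0, shrink=0.5, floor=1e-12):
    change = np.max(np.abs(u_new - u_old) / (np.abs(u_old) + floor))
    if change <= 0.0:
        return grow * dt
    ratio = xi / change                        # >1 means change was small
    return dt * min(grow, max(shrink, ratio))  # clamp the controller swings

u_old = np.array([1.0, 2.0, 4.0])
u_new = np.array([1.05, 2.3, 4.1])             # 15% max relative change
print(next_dt(1e-3, u_new, u_old))             # step shrinks toward xi/change
```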

  20. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    During the last fifteen years, a very large effort to standardize the product models employed in product design, manufacturing and other life-cycle phases has been undertaken. This effort has the acronym STEP, and resulted in the International Standard ISO-10303 "Industrial Automation Systems...... - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development...

  1. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  2. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  3. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  4. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....

  5. Sharing Steps in the Workplace: Changing Privacy Concerns Over Time

    DEFF Research Database (Denmark)

    Jensen, Nanna Gorm; Shklovski, Irina

    2016-01-01

    study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...

  6. Integrated Modelling - the next steps (Invited)

    Science.gov (United States)

    Moore, R. V.

    2010-12-01

Integrated modelling (IM) has made considerable advances over the past decade but it has not yet been taken up as an operational tool in the way that its proponents had hoped. The reasons why will be discussed in Session U17. This talk will propose topics for a research and development programme and suggest an institutional structure which, together, could overcome the present obstacles. Their combined aim would be first to make IM into an operational tool useable by competent public authorities and commercial companies and, in time, to see it evolve into the modelling equivalent of Google Maps, something accessible and useable by anyone with a PC or an iPhone and an internet connection. In a recent study, a number of government agencies, water authorities and utilities applied integrated modelling to operational problems. While the project demonstrated that IM could be used in an operational setting and had benefit, it also highlighted the advances that would be required for its widespread uptake. These were: greatly improving the ease with which models could be a) made linkable, b) linked and c) run; developing a methodology for applying integrated modelling; developing practical options for calibrating and validating linked models; addressing the science issues that arise when models are linked; extending the range of modelling concepts that can be linked; enabling interface standards to pass uncertainty information; making the interface standards platform independent; extending the range of platforms to include those for high performance computing; developing the concept of modelling components as web services; separating simulation code from the model’s GUI, so that all the results from the linked models can be viewed through a single GUI; developing scenario management systems so that there is an audit trail of the version of each model and dataset used in each linked model run. In addition to the above, there is a need to build a set of integrated

  7. A parallel nearly implicit time-stepping scheme

    OpenAIRE

    Botchev, Mike A.; van der Vorst, Henk A.

    2001-01-01

    Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...

  8. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, an aspect overlooked among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included in existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
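
    The caution about careless adaptivity can be made concrete with a toy stochastic differential equation: the step size must be chosen from the local state before the noise increment is drawn, or the statistics become biased. A minimal Python sketch under that discipline (the drag and diffusion coefficients below are illustrative stand-ins, not the Beliaev-Budker forms):

        import numpy as np

        rng = np.random.default_rng(1)

        nu = lambda v: 1.0 / (1.0 + v**2)     # hypothetical velocity-dependent collision rate
        D = 0.05                               # hypothetical diffusion coefficient

        def run(v0=5.0, t_end=50.0, eps=0.05):
            v, t = v0, 0.0
            while t < t_end:
                # choose dt from the local rate *before* sampling the noise;
                # conditioning dt on the drawn increment would bias the statistics
                dt = min(eps / nu(v), t_end - t)
                dW = rng.normal(0.0, np.sqrt(dt))
                v += -nu(v) * v * dt + np.sqrt(2.0 * D) * dW
                t += dt
            return v

        final = [run() for _ in range(200)]
        print(np.mean(final), np.std(final))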

  9. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
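
    The integrator structure underneath such schemes is the classic r-RESPA splitting: cheap fast forces are integrated in an inner loop nested between half-kicks of the expensive slow forces. A minimal sketch with toy harmonic forces standing in for the fragment or range-separated ab initio components described above (all names and constants are illustrative):

        import numpy as np

        # Minimal r-RESPA sketch for one particle: a stiff "fast" force
        # integrated with n inner velocity-Verlet steps per outer "slow" step.
        k_fast, k_slow, m = 100.0, 1.0, 1.0
        f_fast = lambda x: -k_fast * x
        f_slow = lambda x: -k_slow * x

        def respa_step(x, v, dt, n=5):
            v += 0.5 * dt * f_slow(x) / m      # half kick with slow force
            dti = dt / n
            for _ in range(n):                  # inner loop on the fast force
                v += 0.5 * dti * f_fast(x) / m
                x += dti * v
                v += 0.5 * dti * f_fast(x) / m
            v += 0.5 * dt * f_slow(x) / m      # closing half kick
            return x, v

        x, v = 1.0, 0.0
        for _ in range(1000):
            x, v = respa_step(x, v, dt=0.05)
        print(x, v)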

  10. Time step size limitation introduced by the BSSN Gamma Driver

    Energy Technology Data Exchange (ETDEWEB)

    Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)

    2010-08-21

    Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a manner similar to a Courant-Friedrichs-Lewy condition, but one that is independent of the spatial resolution. We give a didactic explanation of this issue, show why mesh refinement simulations in particular suffer from it, and point to a simple remedy. (note)

  11. Genetic demixing and evolution in linear stepping stone models

    Science.gov (United States)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed is how the observed patterns of genetic diversity can be used for statistical inference, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
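
    A minimal simulation in the spirit of the model is easy to write down: a ring of demes, each holding one of two alleles, where every site copies a random neighbor each generation (synchronous voter-model dynamics). The density of domain walls then decays as monoallelic domains coarsen. A hypothetical Python sketch:

        import numpy as np

        rng = np.random.default_rng(0)

        # 1D ring of L demes, alleles 0/1; each generation every site adopts
        # the allele of a uniformly chosen nearest neighbor.
        L, T = 200, 2000
        s = rng.integers(0, 2, size=L)
        walls = []                              # density of domain boundaries
        for _ in range(T):
            nbr = rng.integers(0, 2, size=L) * 2 - 1     # -1 or +1
            s = s[(np.arange(L) + nbr) % L]
            walls.append(np.mean(s != np.roll(s, 1)))
        print(walls[0], walls[-1])              # coarsening: wall density drops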

  12. Modeling the stepping mechanism in negative lightning leaders

    Science.gov (United States)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepwise manner via the mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry of the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field about half as strong as negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account; this allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  13. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  14. HIA, the next step: Defining models and roles

    International Nuclear Information System (INIS)

    Putters, Kim

    2005-01-01

    If HIA is to be an effective instrument for optimising health interests in the policy-making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking; this model involves following structured steps. The second is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking; this model is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems; here HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  15. Time dependent theory of two-step absorption of two pulses

    Energy Technology Data Exchange (ETDEWEB)

    Rebane, Inna, E-mail: inna.rebane@ut.ee

    2015-09-25

    The time-dependent theory of two-step absorption of two different light pulses of arbitrary duration in an electronic three-level model is proposed. The probability that the third level is excited at time t is found as a function of the time delay between the pulses, the spectral widths of the pulses and the energy relaxation constants of the excited electronic levels. Time-dependent perturbation theory is applied without using the “doorway–window” approach. The temporal and spectral behavior of the spectrum is analyzed using as simple a model as possible in the calculations. - Highlights: • A time-dependent theory of two-step absorption in a three-level model is proposed. • Two different light pulses of arbitrary duration are considered. • Time-dependent perturbation theory is applied without the “doorway–window” approach. • The temporal and spectral behavior of the spectra is analyzed for several cases.
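
    For orientation, the backbone of such a theory is presumably the standard second-order time-dependent perturbation amplitude for the sequential 1→2→3 ladder; with phenomenological level widths $\gamma_2$, $\gamma_3$ for the energy relaxation it reads (the precise form used in the paper may differ) $$ c_3(t) = \left(\frac{-i}{\hbar}\right)^{2} \int_{-\infty}^{t} dt_2\, V_{32}(t_2)\, e^{i\omega_{32}t_2 - \gamma_3(t-t_2)/2} \int_{-\infty}^{t_2} dt_1\, V_{21}(t_1)\, e^{i\omega_{21}t_1 - \gamma_2(t_2-t_1)/2}, $$ with $P_3(t) = |c_3(t)|^2$ and $V_{ij}(t) = -\mu_{ij}E(t)$ for the pulse driving each step; the nested integration limits are what encode the time ordering of the two absorptions.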

  16. The construction of geological model using an iterative approach (Step 1 and Step 2)

    International Nuclear Information System (INIS)

    Matsuoka, Toshiyuki; Kumazaki, Naoki; Saegusa, Hiromitsu; Sasaki, Keiichi; Endo, Yoshinobu; Amano, Kenji

    2005-03-01

    One of the main goals of the Mizunami Underground Research Laboratory (MIU) Project is to establish appropriate methodologies for reliably investigating and assessing the deep subsurface. This report documents the results of geological modeling in Step 1 and Step 2 using the iterative investigation approach at the site scale (several 100 m to several km in area). For the Step 1 model, existing information (e.g. literature) and results from geological mapping and a reflection seismic survey were used. For the Step 2 model, additional information obtained from geological investigations using an existing borehole and from shallow borehole investigations was incorporated. As a result of this study, the geological elements that should be represented in the model were defined, and several major faults trending NNW, EW and NE were identified (or inferred) in the vicinity of the MIU site. (author)

  17. Positivity-preserving dual time stepping schemes for gas dynamics

    Science.gov (United States)

    Parent, Bernard

    2018-05-01

    A new approach to discretizing the temporal derivative of the Euler equations is presented here which can be used with dual time stepping. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure, resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach has the advantage over alternatives that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions smaller than that of the second- or third-order accurate backward-difference-formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique, which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods, which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.

  18. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
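
    A compact illustration of this screen-then-clean idea, written for least squares rather than the quantile loss used in the paper (scikit-learn names; the column-rescaling trick implements the adaptive LASSO weights):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)

        # Step 0: synthetic ultra-high-dimensional data with 5 true signals.
        n, p = 100, 1000
        X = rng.normal(size=(n, p))
        beta = np.zeros(p); beta[:5] = 3.0
        y = X @ beta + rng.normal(size=n)

        # Step 1: LASSO screening cuts the model down to a manageable size.
        step1 = Lasso(alpha=0.3).fit(X, y)
        keep = np.flatnonzero(step1.coef_)

        # Step 2: adaptive LASSO on the survivors via column rescaling.
        w = 1.0 / np.abs(step1.coef_[keep])     # adaptive weights
        step2 = Lasso(alpha=0.1).fit(X[:, keep] / w, y)
        beta2 = step2.coef_ / w                 # unscale back
        print(keep[np.flatnonzero(beta2)])      # final selected covariates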

  19. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows for achieving larger step sizes in the simulation of complex systems.

  20. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (Step 0 and Step 1)

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations, analyses, and evaluations have been conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 0 and Step 1, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) as the investigation progressed from Step 0 to Step 1, the understanding of groundwater flow was enhanced and the hydrogeological model could be revised; 2) the importance of faults as major groundwater flow pathways was demonstrated; 3) the geological and hydrogeological characteristics of faults with NNW and NE orientations were shown to be especially significant. The main item specified for further investigations is summarized as follows: the geological and hydrogeological characteristics of NNW- and NE-trending faults are important. (author)

  1. A model for two-step ageing

    Indian Academy of Sciences (India)

    Bulletin of Materials Science, Volume 23, Issue 5. ... precipitates to accentuate the mechanical properties and resistance to stress corrosion cracking. ... In the present work, a model is developed which takes into account the ...

  2. Modelling step-families: exploratory findings.

    Science.gov (United States)

    Bartlema, J

    1988-01-01

    "A combined macro-micro model is applied to a population similar to that forecast for 2035 in the Netherlands in order to simulate the effect on kinship networks of a mating system of serial monogamy. The importance of incorporating a parameter for the degree of concentration of childbearing over the female population is emphasized. The inputs to the model are vectors of fertility rates by age of mother, and by age of father, a matrix of first-marriage rates by age of both partners (used in the macro-analytical expressions), and two parameters H and S (used in the micro-simulation phase). The output is a data base of hypothetical individuals, whose records contain identification number, age, sex, and the identification numbers of their relatives." (SUMMARY IN FRE) excerpt

  3. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the workload of the user. The procedure selects a nearly optimal Δt with a minimum of intervention, relieving the user of the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.

  4. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios, while SS-BayesB obtained the lowest accuracy in the 500-QTL scenario. SS-BayesA was the most efficient and robust model across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of the single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  5. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and these SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  6. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrodinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...

  7. Diffraction model of a step-out transition

    Energy Technology Data Exchange (ETDEWEB)

    Chao, A.W.; Zimmermann, F.

    1996-06-01

    The diffraction model of a cavity, suggested by Lawson, Bane and Sands, is generalized to a step-out transition. Using this model, the high-frequency impedance is calculated explicitly for the case where the transition step is small compared with the beam-pipe radius. In the diffraction model for a small step-out transition, the total energy is conserved but, unlike the cavity case, the diffracted waves in the geometric shadow and in the pipe region do not, in general, carry equal energy. In the limit of small step sizes, the impedance derived from the diffraction model agrees with that found by Balakin and Novokhatsky, and also by Kheifets. This impedance can be used to compute the wake field of a round collimator whose half-aperture is much larger than the bunch length, as exists in the SLC final focus.

  8. Advances in sequential data assimilation and numerical weather forecasting: An Ensemble Transform Kalman-Bucy Filter, a study on clustering in deterministic ensemble square root filters, and a test of a new time stepping scheme in an atmospheric model

    Science.gov (United States)

    Amezcua, Javier

    in the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.

  9. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants, who were asked to perform the activities of sitting, standing, walking, and cycling while wearing the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone, and the application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
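
    MLD is essentially multinomial logistic regression, so the classification stage can be sketched in a few lines. The two features below are synthetic stand-ins for the insole's window statistics (e.g. mean and variance of pressure or acceleration channels), not the paper's feature set:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Four activity classes {sit, stand, walk, cycle}, two toy features.
        n_per = 100
        means = np.array([[0.1, 0.1], [0.3, 0.1], [0.8, 0.6], [0.6, 0.9]])
        X = np.vstack([rng.normal(m, 0.15, size=(n_per, 2)) for m in means])
        y = np.repeat(np.arange(4), n_per)

        # Multinomial logit ("MLD") fitted with lbfgs.
        clf = LogisticRegression(max_iter=500).fit(X, y)
        print(clf.score(X, y))                 # training accuracy of the sketch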

  10. Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times

    OpenAIRE

    Sovová, H. (Helena)

    2012-01-01

    Kinetics of supercritical fluid extraction (SFE) from plants is variable due to different micro-structure of plants and their parts, different properties of extracted substances and solvents, and different flow patterns in the extractor. Variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of characteristic times of four single extraction steps: internal diffusion, exter...

  11. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of the potential monetary damage of flooding is an essential part of flood risk management. One possibility for estimating the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and, subsequently, the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by continuous rainfall-runoff simulation in combination with a coupled weather generator and a temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach: a synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for

  12. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    In this paper we analyze the complexity of the time limits found especially in regulated public administration processes. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages we present the PSD process modeling language, which supports the defined time limit lifecycles natively and therefore keeps the models simple and easy to understand.

  13. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESSs) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESSs, and loads under cold load pickup (CLPU) conditions impose extra complexity on the problem formulation and solution. In this paper, a multi-time-step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined so as to minimize the number of unserved customers by energizing the system step by step without violating operational constraints at any time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operating conditions. Furthermore, the proposed method is validated through several case studies performed on modified IEEE 13-node and IEEE 123-node test feeders.

  14. Coherent states for the time dependent harmonic oscillator: the step function

    International Nuclear Information System (INIS)

    Moya-Cessa, Hector; Fernandez Guasti, Manuel

    2003-01-01

    We study the time evolution of the quantum harmonic oscillator subjected to a sudden change of frequency. The treatment is based on an approximate analytic solution to the time-dependent Ermakov equation for a step function. This approach allows a continuous treatment that differs from former studies, which involve matching two time-independent solutions at the time when the step occurs.
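
    In a common normalization, the Ermakov (Pinney) equation referred to above reads $$ \ddot{\rho}(t) + \omega^2(t)\,\rho(t) = \frac{1}{\rho^{3}(t)}, \qquad \omega(t) = \begin{cases} \omega_0, & t < 0, \\ \omega_1, & t \ge 0, \end{cases} $$ and the coherent states are built from an (approximate) analytic solution $\rho(t)$ carried continuously across the frequency jump, rather than from two time-independent solutions matched at $t = 0$.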

  15. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain-lattice and grain-boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for benchmarking and validation of this model. Results reveal that its prediction is in better agreement with the experimental measurements than that of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  16. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  17. Step-indexed Kripke models over recursive worlds

    DEFF Research Database (Denmark)

    Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan

    2011-01-01

    worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models...

  18. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  19. Comparison of time stepping schemes on the cable equation

    Directory of Open Access Journals (Sweden)

    Chuan Li

    2010-09-01

    Electrical propagation in excitable tissue, such as nerve fibers and heart muscle, is described by a parabolic PDE for the transmembrane voltage $V(x,t)$, known as the cable equation, $$ \frac{1}{r_a}\frac{\partial^2 V}{\partial x^2} = C_m\frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V,t) + I_{\mathrm{stim}}(t), $$ where $r_a$ and $C_m$ are the axial resistance and membrane capacitance. The source term $I_{\mathrm{ion}}$ represents the total ionic current across the membrane, governed by the Hodgkin-Huxley or other more complicated ionic models, and $I_{\mathrm{stim}}(t)$ is an applied stimulus current.
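
    A minimal comparison in this spirit, for the passive linear cable ($I_{\mathrm{ion}} = V/r_m$, no stimulus): forward Euler against Crank-Nicolson on the same grid. Parameter values are illustrative, not physiological:

        import numpy as np

        nx, ra, cm, rm = 101, 1.0, 1.0, 10.0
        dx = 0.1
        dt = 0.4 * cm * ra * dx**2            # near the explicit stability limit
        lam = dt / (ra * cm * dx**2)
        V0 = np.exp(-np.linspace(-5, 5, nx)**2)

        def euler(V, steps):                   # forward Euler, Dirichlet ends
            V = V.copy()
            for _ in range(steps):
                V[1:-1] += lam * (V[:-2] - 2*V[1:-1] + V[2:]) - dt * V[1:-1] / (rm * cm)
            return V

        def crank_nicolson(V, steps):          # implicit half / explicit half
            V = V.copy()
            A = np.zeros((nx, nx))
            i = np.arange(1, nx - 1)
            A[i, i - 1] = A[i, i + 1] = -0.5 * lam
            A[i, i] = 1 + lam + 0.5 * dt / (rm * cm)
            A[0, 0] = A[-1, -1] = 1.0
            for _ in range(steps):
                b = V.copy()
                b[1:-1] += 0.5 * lam * (V[:-2] - 2*V[1:-1] + V[2:]) - 0.5 * dt * V[1:-1] / (rm * cm)
                V = np.linalg.solve(A, b)
            return V

        print(np.max(np.abs(euler(V0, 200) - crank_nicolson(V0, 200))))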

  20. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    Science.gov (United States)

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher-intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6 months post-treatment, using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in the change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvements in attachment avoidance and interpersonal problems. The findings indicated that the second step of the stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone, but the study provided some evidence that the second step may reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  1. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  2. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions; with suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt embedded Runge-Kutta pairs in the Fehlberg style, were also studied and discussed. (authors)
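
    The Two-Step (step-doubling) idea can be sketched on a scalar test problem: advance once with step h and twice with h/2, use the difference as an error estimate, and resize h from the error-to-tolerance ratio. A hypothetical Python sketch with implicit Euler on y' = -50y, a crude linear stand-in for the stiff point kinetics system:

        import numpy as np

        lam_c = -50.0
        be = lambda y, h: y / (1.0 - h * lam_c)   # implicit Euler, linear case only

        def integrate(y0=1.0, t_end=1.0, h=1e-3, tol=1e-6):
            y, t = y0, 0.0
            while t < t_end:
                h = min(h, t_end - t)
                y1 = be(y, h)                      # one big step
                y2 = be(be(y, h / 2), h / 2)       # two half steps
                err = abs(y2 - y1)                 # local error estimate
                if err <= tol:
                    t, y = t + h, y2               # accept the better value
                # resize h either way; exponent 1/2 since local error is O(h^2)
                h *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
            return y, t

        print(integrate())                         # compare with exp(-50)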

  3. Implementation of Real-Time Machining Process Control Based on Fuzzy Logic in a New STEP-NC Compatible System

    Directory of Open Access Journals (Sweden)

    Po Hu

    2016-01-01

    Implementing real-time machining process control on the shop floor is of great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithreading, and shared-memory techniques in combination. The cutting force along a specific direction of the machining feature in side milling is chosen as the controlled variable, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, the STEP-NC data model, and the implementation methods for real-time machining process control. The results of the experiments prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, keeping the actual cutting force near the target value whether the axial cutting depth changes suddenly or continuously.

  4. Forecasting with Nonlinear Time Series Model: A Monte-Carlo

    African Journals Online (AJOL)

    erated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be obtained easily, as in the linear case, but forecasts for steps greater than or equal to ..... Franses, P. H. (1998). Time Series Models for Business and Economic Forecasting. Cambridge University Press.

  5. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of the parabolic terms considerably reduces the simulation time step, owing to the stability requirement's dependence on the square of the grid resolution (Δx). Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
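
    The flavor of the scheme can be conveyed with a first-order RKL (RKL1) sketch for u_t = u_xx, using the stage recursion of Meyer, Balsara & Aslam; s stages buy one super-step of (s^2+s)/2 times the explicit limit (RKL2 adds second-order correction terms omitted here, so treat this as an illustrative reduction, not the paper's scheme):

        import numpy as np

        nx, s = 201, 9
        dx = 1.0 / (nx - 1)
        dt_exp = 0.5 * dx**2                   # explicit diffusion limit
        dt_super = dt_exp * (s * s + s) / 2.0  # one RKL1 super-step

        def L(u):                              # discrete Laplacian, Dirichlet ends
            r = np.zeros_like(u)
            r[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
            return r

        def rkl1_step(u):
            w1 = 2.0 / (s * s + s)
            Yjm2 = u
            Yjm1 = u + w1 * dt_super * L(u)    # stage j = 1
            for j in range(2, s + 1):          # Legendre-type three-term recursion
                mu, nu_ = (2 * j - 1) / j, (1 - j) / j
                Yj = mu * Yjm1 + nu_ * Yjm2 + mu * w1 * dt_super * L(Yjm1)
                Yjm2, Yjm1 = Yjm1, Yj
            return Yjm1

        x = np.linspace(0, 1, nx)
        u = np.sin(np.pi * x)
        for _ in range(20):
            u = rkl1_step(u)
        print(u.max())                         # decays toward exp(-pi^2 t)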

  6. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
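
    A few lines of Python reproduce the qualitative mechanism (this is the Barabási-style highest-priority-first queue; the model analyzed above is a variant with its own priority distribution): execute the most urgent of L tasks, replace it with a fresh one, and record how long the executed task had waited:

        import numpy as np

        rng = np.random.default_rng(0)

        Ltasks, T = 10, 200000
        prio = rng.random(Ltasks)              # priorities of the current task list
        born = np.zeros(Ltasks, dtype=int)     # step at which each task arrived
        waits = []
        for t in range(T):
            i = np.argmax(prio)                # execute the most urgent task
            waits.append(t - born[i])          # its waiting (interevent) time
            prio[i] = rng.random()             # replace it with a fresh task
            born[i] = t
        waits = np.array(waits)
        print(np.mean(waits), np.percentile(waits, 99.9))   # heavy tail emerges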

  7. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have successfully been analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions as compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the afore-mentioned energy region. (author). 10 refs., 2 figs

  8. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
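
    A toy version of such feedback control: monitor the relative change of a key variable per step, reject and halve the step when the change is too large, and let the step relax upward otherwise. The fast-slow-fast transient and all constants below are illustrative, not from the paper:

        import numpy as np

        def f(y, t):                           # toy transient: fast, slow, fast
            return -200 * y if (t < 0.05 or t > 0.9) else -0.5 * y

        def integrate(y0=1.0, t_end=1.0, dt=1e-4, target=0.02):
            y, t = y0, 0.0
            while t < t_end:
                dt = min(dt, t_end - t)
                dy = f(y, t) * dt              # explicit Euler trial step
                change = abs(dy) / max(abs(y), 1e-12)
                if change > 2 * target:        # feedback: reject and shrink
                    dt *= 0.5
                    continue
                y, t = y + dy, t + dt
                # relax dt toward the target relative change, with growth capped
                dt *= min(1.25, max(0.5, target / max(change, 1e-12)))
            return y

        print(integrate())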

  9. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.

  10. Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps

    International Nuclear Information System (INIS)

    Tiselj, Iztok; Cerne, Gregor

    2000-01-01

    The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: The lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms

  11. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...

  12. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  13. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  14. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management. Actual ET depends on an estimate of a water stress index and of the average soil water in the crop root zone, and therefore on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root-zone water content, time step, crop sensitivity, and soil. In this paper, different numerical methods are compared for different soil textures and crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals the evapotranspiration, ET. In this approach, deep percolation is ignored because of the deep water table and the negligible unsaturated hydraulic conductivity below rooting depth. The resulting differential equation may be solved analytically or numerically with different algorithms. We adopted several numerical schemes to approximate the differential equation: the explicit, implicit, and modified Euler methods, the midpoint method, and the third-order Heun method. Three general soil types (sand, silt, and clay) and three crop sensitivity classes under the Nishaboor plain were used. The standard soil fraction depletion pstd (corresponding to ETc = 5 mm d-1), below which the crop faces water stress, was adopted for crop sensitivity. Three values of pstd were considered to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
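
    As an illustration of how the choice of scheme enters, here is a minimal sketch of two of the listed one-step methods applied to a schematic root-zone depletion equation dS/dt = -ETc·Ks(S); the linear stress function and all parameter values are illustrative assumptions, not the paper's calibration.

```python
ETc, p_std, TAW = 5.0, 0.5, 100.0   # mm/d, depletion fraction, mm (illustrative)

def dSdt(S):
    # Water-stress coefficient: ETa falls linearly once depletion passes p_std.
    ks = min(1.0, S / ((1.0 - p_std) * TAW))
    return -ETc * ks

def euler_step(S, dt):
    # Explicit Euler, first order.
    return S + dt * dSdt(S)

def heun3_step(S, dt):
    # Third-order Heun method.
    k1 = dSdt(S)
    k2 = dSdt(S + dt / 3.0 * k1)
    k3 = dSdt(S + 2.0 * dt / 3.0 * k2)
    return S + dt * (k1 + 3.0 * k3) / 4.0
```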

  15. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓp-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order α ∈ (0, 2), α ≠ 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method, and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
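
    For concreteness, the L1 scheme mentioned above admits a compact implementation for the Caputo derivative of order 0 < α < 1; this is the standard textbook form of the scheme, shown only to fix notation.

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha (0 < alpha < 1)
    at t_n = n*dt, given the solution history u = [u_0, ..., u_n]."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)       # L1 weights b_j
    hist = np.asarray(u, dtype=float)
    diffs = hist[n - j] - hist[n - 1 - j]                     # u_{n-j} - u_{n-j-1}
    return float(b @ diffs) / (math.gamma(2.0 - alpha) * dt ** alpha)
```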

  16. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interference between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. When these paths are combined within the independent-particle frozen-orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering, and with it the capability for interference, is regained when one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments on single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].

  17. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up transformers (GSUs). INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated their use. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting in Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.

  18. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
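
    The thesis's perturbed and implicit methods are beyond a short sketch, but the classic explicit third-order SSP Runge-Kutta method of Shu and Osher shows the defining structure: each stage is a convex combination of forward Euler steps, which is what preserves strong stability under the usual CFL restriction.

```python
def ssprk33_step(f, u, dt):
    """One step of the explicit SSPRK(3,3) (Shu-Osher) method for u' = f(u)."""
    u1 = u + dt * f(u)                              # forward Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))        # convex combination of Euler steps
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))
```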

  19. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

    Full Text Available During the last years, a great range of computer-aided tools has been generated to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol, or the extension of an existing protocol, would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case, a product model for gear units was defined based on elements of AP 214. Therefore no change of the Application Protocol is necessary. Meanwhile, the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with FVA (the German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  20. One-step electrodeposition process of CuInSe2: Deposition time effect

    Indian Academy of Sciences (India)

    Administrator

    CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.

  1. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  2. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real-time hardware and software architectures adopted are also described, with particular attention to their relevance to ITER. (authors)

  4. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. di Napoli Federico II, Napoli (Italy); Crisanti, F. [Associazione EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Asociacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real-time hardware and software architectures adopted are also described, with particular attention to their relevance to ITER. (authors)

  5. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  6. Stabilization of a three-dimensional limit cycle walking model through step-to-step ankle control.

    Science.gov (United States)

    Kim, Myunghee; Collins, Steven H

    2013-06-01

    Unilateral, below-knee amputation is associated with an increased risk of falls, which may be partially related to a loss of active ankle control. If ankle control can contribute significantly to maintaining balance, even in the presence of active foot placement, this might provide an opportunity to improve balance using robotic ankle-foot prostheses. We investigated ankle- and hip-based walking stabilization methods in a three-dimensional model of human gait that included ankle plantarflexion, ankle inversion-eversion, hip flexion-extension, and hip ad/abduction. We generated discrete feedback control laws (linear quadratic regulators) that altered nominal actuation parameters once per step. We used ankle push-off, lateral ankle stiffness and damping, fore-aft foot placement, lateral foot placement, or all of these as control inputs. We modeled environmental disturbances as random, bounded, unexpected changes in floor height, and defined balance performance as the maximum allowable disturbance value for which the model walked 500 steps without falling. Nominal walking motions were unstable, but were stabilized by all of the step-to-step control laws we tested. Surprisingly, step-by-step modulation of ankle push-off alone led to better balance performance (3.2% leg length) than lateral foot placement (1.2% leg length) for these control laws. These results suggest that appropriate control of robotic ankle-foot prosthesis push-off could make balancing during walking easier for individuals with amputation.
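
    The once-per-step linear quadratic regulators described above can be sketched generically: given a linearized step-to-step map x_{k+1} = A x_k + B u_k for deviations from the nominal gait, a constant gain follows from iterating the discrete Riccati equation to a fixed point. The matrices here are placeholders, not the paper's gait model.

```python
import numpy as np

def discrete_lqr_gain(A, B, Q, R, iters=500):
    """Fixed-point iteration of the discrete algebraic Riccati equation;
    returns K for the step-to-step feedback law u_k = -K @ x_k."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain for P
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K
```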

  7. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    Science.gov (United States)

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  8. Ultra-structural time-course study in the C. elegans model for Duchenne muscular dystrophy highlights a crucial role for sarcomere-anchoring structures and sarcolemma integrity in the earliest steps of the muscle degeneration process.

    Science.gov (United States)

    Brouilly, Nicolas; Lecroisey, Claire; Martin, Edwige; Pierson, Laura; Mariol, Marie-Christine; Qadota, Hiroshi; Labouesse, Michel; Streichenberger, Nathalie; Mounier, Nicole; Gieseler, Kathrin

    2015-11-15

    Duchenne muscular dystrophy (DMD) is a genetic disease characterized by progressive muscle degeneration due to mutations in the dystrophin gene. In spite of great advances in the design of curative treatments, most patients currently receive palliative therapies with steroid molecules such as prednisone or deflazacort thought to act through their immunosuppressive properties. These molecules only slightly slow down the progression of the disease and lead to severe side effects. Fundamental research is still needed to reveal the mechanisms involved in the disease that could be exploited as therapeutic targets. By studying a Caenorhabditis elegans model for DMD, we show here that dystrophin-dependent muscle degeneration is likely to be cell autonomous and affects the muscle cells most involved in locomotion. We demonstrate that muscle degeneration is dependent on exercise and force production. Exhaustive studies by electron microscopy allowed us to establish for the first time the chronology of subcellular events occurring during the entire process of muscle degeneration. This chronology highlighted the crucial role of dystrophin in stabilizing sarcomeric anchoring structures and the sarcolemma. Our results suggest that the disruption of sarcomeric anchoring structures and sarcolemma integrity, observed at the onset of the muscle degeneration process, triggers subcellular consequences that lead to muscle cell death. An ultra-structural analysis of muscle biopsies from DMD patients suggested that the chronology of subcellular events established in C. elegans models the pathogenesis in humans. Finally, we found that the loss of sarcolemma integrity was greatly reduced after prednisone treatment, suggesting a role for this molecule in plasma membrane stabilization.

  9. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
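
    As a representative member of the splitting family examined in such studies, here is a BAOAB-type scheme, which interleaves deterministic kicks and drifts with an exact Ornstein-Uhlenbeck velocity update; it is shown as one common choice, not necessarily the specific integrator the paper singles out.

```python
import numpy as np

def baoab_step(x, v, m, force, dt, gamma, kT, rng):
    """One BAOAB Langevin step: B (half kick), A (half drift),
    O (exact friction + noise), A (half drift), B (half kick)."""
    v = v + 0.5 * dt * force(x) / m
    x = x + 0.5 * dt * v
    c = np.exp(-gamma * dt)                              # friction decay factor
    v = c * v + np.sqrt((1.0 - c * c) * kT / m) * rng.standard_normal(np.shape(v))
    x = x + 0.5 * dt * v
    v = v + 0.5 * dt * force(x) / m
    return x, v
```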

  10. Modeling and Simulation of Out of Step Blocking Relay for

    Directory of Open Access Journals (Sweden)

    Ahmed A. Al Adwani

    2013-05-01

    Full Text Available This paper investigates the effect of power swings on the performance of a distance protection relay installed on HV/EHV transmission lines, as well as on power system stability. A conventional distance relay cannot operate properly under transient stability conditions; it mal-operates, which adversely impacts its trip signals. To overcome this problem, an Out-Of-Step (OOS) relay was modeled and simulated to work jointly with the distance relay and to supervise and control its trip-signal response. The setting-characteristics technique of the OOS relay is based on a concentric-polygon scheme to detect power swings under transient stability conditions. The modeling and simulation were carried out in Matlab/Simulink. Two relays were implemented and tested with two equivalent networks connected at the line ends. The results of this study show the effectiveness and reliability of this approach for controlling the distance relay response under transient stability conditions, and they indicate the possibility of detecting faults that may occur during a power swing.

  11. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    Full Text Available The goal of this paper was to develop an educationally oriented application based on the data mining Apriori algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact on the speed of execution as a function of problem constraints (the values of the support and confidence variables, or the size of the transactional database). The paper presents a brief overview of the Apriori algorithm, aspects of the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface, and the process of data mining a test transactional database. The impact of some constraints on the speed of the algorithm is also experimentally measured, without a systematic review of different approaches to increasing execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
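
    A bare-bones version of the level-wise join/prune loop at the heart of Apriori, in the didactic spirit of the tool described above (not the authors' implementation); transactions are sets of items and support is an absolute count.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent itemset: support count} by level-wise search."""
    level = {frozenset([i]) for t in transactions for i in t}
    level = {c for c in level if sum(c <= t for t in transactions) >= min_support}
    freq, k = {}, 1
    while level:
        for c in level:
            freq[c] = sum(c <= t for t in transactions)
        k += 1
        # Join: unions of frequent (k-1)-itemsets that produce k-itemsets.
        cand = {a | b for a in level for b in level if len(a | b) == k}
        # Prune: every (k-1)-subset of a candidate must itself be frequent.
        cand = {c for c in cand
                if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        level = {c for c in cand
                 if sum(c <= t for t in transactions) >= min_support}
    return freq

# Example: apriori([{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}], min_support=2)
```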

  12. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  14. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  15. Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach

    Directory of Open Access Journals (Sweden)

    Baah Aye Kusi

    2015-09-01

    Full Text Available We investigate bank profitability in Ghana over periods before, during, and after the global financial crisis, using the five-step DuPont model for the first time. We adapt the variables of the five-step DuPont model to explain bank profitability with panel data on twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, robust-error fixed and random effects models are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (assets to equity), and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost returns to equity holders through the above-mentioned variables. We further report that the five-step DuPont model better explains the total variation (94%) in bank profitability in Ghana than earlier findings, suggesting that bank-specific variables are key to explaining ROE in Ghanaian banks. We cite no empirical study that has employed the five-step DuPont model, making our study unique and different from earlier studies, as we assert that bank-specific variables are core to explaining bank profitability.
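
    For reference, the textbook five-step DuPont decomposition of return on equity, whose factors match the components named above (the paper's exact variable definitions may differ):

```latex
\mathrm{ROE}
= \underbrace{\frac{\text{net income}}{\text{pre-tax income}}}_{\text{tax burden}}
\times \underbrace{\frac{\text{pre-tax income}}{\text{EBIT}}}_{\text{interest burden}}
\times \underbrace{\frac{\text{EBIT}}{\text{sales}}}_{\text{operating margin}}
\times \underbrace{\frac{\text{sales}}{\text{assets}}}_{\text{asset turnover}}
\times \underbrace{\frac{\text{assets}}{\text{equity}}}_{\text{leverage}}
```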

  16. On an adaptive time stepping strategy for solving nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Chen, K.; Baines, M.J.; Sweby, P.K.

    1993-01-01

    A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equidistributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that our proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs
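
    The equidistribution idea can be illustrated with the simplest error estimator, step doubling: compare one step of size dt against two of size dt/2 and adjust dt to hold the local error near a tolerance. This generic sketch stands in for the paper's B-spline-based estimator, which is specific to its finite element setting.

```python
def adaptive_march(f, u0, t0, t_end, dt0, tol, step):
    """March u' = f(u) with step-doubling error control; `step(f, u, dt)`
    is any one-step method (e.g., explicit Euler)."""
    t, u, dt = t0, u0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = step(f, u, dt)
        fine = step(f, step(f, u, 0.5 * dt), 0.5 * dt)
        err = abs(fine - coarse)                 # local truncation error estimate
        if err <= tol:                           # accept; cautiously enlarge dt
            t, u = t + dt, fine
            dt *= min(2.0, max(0.5, 0.9 * (tol / (err + 1e-30)) ** 0.5))
        else:                                    # reject; shrink dt and retry
            dt *= 0.5
    return u
```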

  17. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
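
    The standard Aarseth criterion referred to above combines the force and its first three time derivatives into a dimensionally consistent step estimate; a direct transcription (with dimensionless accuracy parameter eta) reads:

```python
import numpy as np

def aarseth_timestep(a, a1, a2, a3, eta=0.02):
    """Aarseth time-step criterion from the acceleration a and its first
    three time derivatives a1, a2, a3 (NumPy vectors for one particle)."""
    na, na1, na2, na3 = (np.linalg.norm(v) for v in (a, a1, a2, a3))
    return np.sqrt(eta * (na * na2 + na1 ** 2) / (na1 * na3 + na2 ** 2))
```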

  18. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  19. Theoretical intercomparison of multi-step direct reaction models and computational intercomparison of multi-step direct reaction models

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-08-01

    In recent years several statistical theories have been developed concerning multistep direct (MSD) nuclear reactions. In addition, dominant in applications is a whole class of semiclassical models that may be subsumed under the heading of 'generalized exciton models': these are basically MSD-type extensions on top of compound-like concepts. In this report the relationship between their underlying statistical MSD postulates is highlighted. A common framework is outlined that makes it possible to generate the various MSD theories by assigning statistical properties to different parts of the nuclear Hamiltonian. It is then shown that distinct forms of nuclear randomness are embodied in the mentioned theories. All these theories appear to be very similar at a qualitative level. To explain the high-energy tails and forward-peaked angular distributions typical of particles emitted in MSD reactions, it is imagined that the incident continuum particle stepwise loses its energy and direction in a sequence of collisions, thereby creating new particle-hole pairs in the target system. At each step, emission may take place. The statistical aspect comes in because many continuum states are involved in the process; these are supposed to display chaotic behavior, and the associated randomness assumption gives rise to important simplifications in the expression for MSD emission cross sections. This picture suggests that the mentioned MSD models can be interpreted as variants of essentially one and the same theory. However, this appears not to be the case. To show this, the usual MSD distinction within the composite reacting nucleus between the fast continuum particle and the residual system must be made precise: the nucleons of the residual core are to be distinguished from those involved in the residual interactions with the leading particle. This distinction will turn out to be crucial to the present analysis. 27 refs.; 5 figs.; 1 tab

  20. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method, we can obtain the second

  1. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations and proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics

  2. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
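
    The neighbor constraint can be sketched as a relaxation sweep: cap each patch's locally allowed step at a fixed ratio of its neighbors' steps and iterate to a fixed point. This is an illustrative form of such a constraint, not necessarily the paper's exact rule.

```python
def constrain_patch_steps(dt_local, neighbors, ratio=2.0):
    """dt_local: {patch: locally allowed dt}; neighbors: {patch: iterable of
    adjacent patches}. Returns steps satisfying dt[p] <= ratio * dt[q]
    for every neighboring pair (p, q), by monotone relaxation."""
    dt = dict(dt_local)
    changed = True
    while changed:
        changed = False
        for p, nbrs in neighbors.items():
            nbrs = list(nbrs)
            if not nbrs:
                continue
            cap = ratio * min(dt[q] for q in nbrs)   # tightest neighbor bound
            if dt[p] > cap:
                dt[p] = cap
                changed = True
    return dt
```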

  3. Modelling urban travel times

    NARCIS (Netherlands)

    Zheng, F.

    2011-01-01

    Urban travel times are intrinsically uncertain due to the many stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  4. Elderly fallers enhance dynamic stability through anticipatory postural adjustments during a choice stepping reaction time

    Directory of Open Access Journals (Sweden)

    Romain Tisserand

    2016-11-01

    Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for elderly people to avoid falling. This capacity can be assessed simply with the choice stepping reaction time test (CSRT), in which elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F have longer stepping times remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APAs) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT in which kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During the APA phase, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off in the respective direction. By elongating their APAs, elderly F use a safer balance strategy that prioritizes dynamic stability over the objective of the task. Such a choice of balance strategy probably stems from muscular limitations and/or a greater fear of falling, and it paradoxically indicates an increased risk of falls.

  5. step by step process from logic model to case study method as an ...

    African Journals Online (AJOL)

    Global Journal

    Logic models and case study approach to programme evaluation have proven ... in qualitative methodology. There is ... Note: IEHPs= internationally educated health professionals, ... interviews with the programme managers. .... programme assessed to ensure that the IEHPs are ready to face the certification ..... Comparison.

  6. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  7. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    Science.gov (United States)

    Molnar, P.

    2012-04-01

    variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and the number of grains stored in the system at any given time are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus in the discussion will be to demonstrate how even in such a simple model the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are not visible anymore, along the lines of "shredding the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.

  8. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    Science.gov (United States)

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  9. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings

  10. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of recurrent neural networks to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
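
    Independently of the network used, multi-step prediction of an embedded chaotic series is commonly done by iterated one-step prediction: feed each prediction back into the delay vector. A generic sketch follows (any trained one-step `model`; embedding dimension and delay are assumed parameters, chosen by co-evolution in the paper).

```python
import numpy as np

def multi_step_forecast(model, history, horizon, dim, tau):
    """Iterate a one-step predictor `model(delay_vector) -> float` over
    `horizon` steps, using a delay embedding of dimension `dim` and delay
    `tau`; the delay vector is ordered newest sample first."""
    series = list(history)
    for _ in range(horizon):
        x = np.array([series[-1 - k * tau] for k in range(dim)])
        series.append(float(model(x)))
    return series[len(history):]
```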

  11. Stepping toward a Macaque Model of HIV-1 Induced AIDS

    Directory of Open Access Journals (Sweden)

    Jason T. Kimata

    2014-09-01

    Full Text Available HIV-1 exhibits a narrow host range, hindering the development of a robust animal model of pathogenesis. Past studies have demonstrated that the restricted host range of HIV-1 may be largely due to the inability of the virus to antagonize and evade effector molecules of the interferon response in other species. They have also guided the engineering of HIV-1 clones that can replicate in CD4 T-cells of Asian macaque species. However, while replication of these viruses in macaque hosts is persistent, it has been limited and without progression to AIDS. In a new study, Hatziioannou et al. demonstrate for the first time that adapted macaque-tropic HIV-1 can persistently replicate at high levels in pigtailed macaques (Macaca nemestrina), but only if CD8 T-cells are depleted at the time of inoculation. The infection causes rapid disease and recapitulates several aspects of AIDS in humans. Additionally, the virus undergoes genetic changes to further escape innate immunity in association with disease progression. Here, the importance of these findings is discussed, as they relate to pathogenesis and model development.

  12. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  13. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
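
    The coarse first stage lends itself to a few lines: integrate the received energy in blocks and report the first block whose energy crosses a fraction of the peak. Block size and threshold are illustrative assumptions; the paper's second, hypothesis-testing stage would then refine this estimate.

```python
import numpy as np

def coarse_toa(signal, fs, block, threshold_ratio=0.5):
    """Energy-based coarse TOA (seconds): first length-`block` window whose
    energy reaches threshold_ratio * max block energy; fs is the sample rate."""
    s = np.asarray(signal, dtype=float)
    n = s.size // block
    # Per-block energy via segmented sums over the squared samples.
    energy = np.add.reduceat(s[: n * block] ** 2, np.arange(0, n * block, block))
    first = int(np.argmax(energy >= threshold_ratio * energy.max()))
    return first * block / fs
```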

  14. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

    The present investigation evaluates the influence of three different step chute geometries when skimming flow was allowed over them, with the aim of determining the aerated flow length, which is a significant factor when developing empirical equations for estimating the aeration efficiency of flow. Overall, forty experiments were ...

  15. An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling

    Directory of Open Access Journals (Sweden)

    A. Iqbal

    2014-12-01

    Full Text Available Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication. Thus, it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered the most reliable way to model anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for modeling tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). Very good agreement is found between SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than WGM and provides more detail of propagation effects than SSFM.
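
    For orientation, the Fourier baseline (SSFM) against which the wavelet method is benchmarked marches the standard parabolic equation by alternating a spectral diffraction step with a refractive phase screen; this sketch assumes a uniform transverse grid and a simple refractive-index profile.

```python
import numpy as np

def ssfm_march(u, dx, dz, k0, n_index, n_steps):
    """Split-step Fourier marching of the standard parabolic wave equation:
    u is the complex field on a transverse grid of spacing dz, dx the range
    step, k0 the reference wavenumber, n_index the refractive index profile
    (same shape as u)."""
    kz = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dz)     # transverse wavenumbers
    diffract = np.exp(-1j * kz ** 2 * dx / (2.0 * k0))  # free-space half of split
    refract = np.exp(1j * k0 * (n_index - 1.0) * dx)    # environment phase screen
    for _ in range(n_steps):
        u = np.fft.ifft(diffract * np.fft.fft(u)) * refract
    return u
```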

  16. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    Science.gov (United States)

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.

  17. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

    ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins with various tube-target distances and exposure times. Five 1-mm-thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (40 cm, 0.2 s; 30 cm, 0.2 s; 30 cm, 0.16 s; 30 cm, 0.12 s; 15 cm, 0.2 s; 15 cm, 0.12 s) at 70 kVp and 7 mA, along with a 12-step aluminum step wedge (1 mm incremental steps), using a PSP digital sensor. The radiopacities measured with Digora for Windows 2.5 software were then converted to absorbencies, A = -log(1 - G/255), where A is the absorbency and G is the measured gray scale. Furthermore, a linear regression model of aluminum thickness versus absorbency was developed and used to convert the radiopacity of dental materials to equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step step wedge (using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly among setups (p<0.001) and between the materials (p<0.001). The best prediction model was obtained for the 30 cm, 0.2 s setup (R2=0.999). Data from the reduced modified step wedge were remarkable and comparable with those from the 12-step wedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.

  18. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
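
    The Metropolis correction at the heart of such a hybrid MD-MC step can be sketched generically. The fragment below is not the DHMTS implementation: it assumes the cheap propagator is time-reversible and volume-preserving (as in hybrid Monte Carlo), so that accepting with the reference-Hamiltonian energy change restores the Boltzmann distribution; `propagate_cheap` and `h_reference` are hypothetical placeholders.

```python
import numpy as np

# Hedged sketch of the Metropolis step that makes a dual-Hamiltonian hybrid
# MD-MC move consistent with the Boltzmann distribution: a short trajectory is
# propagated with the cheap Hamiltonian, and the move is accepted with
# probability min(1, exp(-beta * dH)) evaluated with the expensive reference
# Hamiltonian, treating the discretization error as external work.

def hybrid_mdmc_move(state, beta, propagate_cheap, h_reference, rng):
    trial = propagate_cheap(state)                 # inexpensive dynamics
    dH = h_reference(trial) - h_reference(state)   # reference-energy change
    if rng.random() < np.exp(-beta * dH):          # Metropolis criterion
        return trial, True
    return state, False                            # reject: keep old state
```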

  19. DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL

    International Nuclear Information System (INIS)

    WIENKE, H.; CAPOTE, R.; HERMAN, M.; SIN, M.

    2007-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations

  1. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was......, on the other hand, lighter than the single-step method....

  2. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  3. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    Full Text Available It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the results of the simulations for the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.

  4. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
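
    The interpolation step lends itself to a one-line implementation. The sketch below assumes linear interpolation in height between two horizontal temperature maps; the patent text does not specify the weighting, so this is an illustration only.

```python
import numpy as np

# Minimal sketch of the interpolation described in the abstract: given ceiling
# and floor temperature maps on the same horizontal grid, build a 3-D field by
# interpolating linearly in height. Linear weighting is an assumption.

def room_temperature_model(t_floor, t_ceiling, heights, ceiling_height):
    """Return T[z, y, x] for the requested heights (same units as the maps)."""
    w = np.asarray(heights) / ceiling_height            # 0 at floor, 1 at ceiling
    return (1 - w)[:, None, None] * t_floor + w[:, None, None] * t_ceiling

t3d = room_temperature_model(np.full((4, 5), 21.0),     # floor map, deg C
                             np.full((4, 5), 27.0),     # ceiling map
                             heights=[0.0, 1.0, 2.0, 3.0],
                             ceiling_height=3.0)
```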

  5. A Step-indexed Semantic Model of Types for the Call-by-Name Lambda Calculus

    OpenAIRE

    Meurer, Benedikt

    2011-01-01

    Step-indexed semantic models of types were proposed as an alternative to purely syntactic safety proofs using subject-reduction. Building upon the work by Appel and others, we introduce a generalized step-indexed model for the call-by-name lambda calculus. We also show how to prove type safety of general recursion in our call-by-name model.

  6. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post and pre stack migration results.
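
    The abstract's central observation, that the usual second-order scheme is a two-term truncation of the REM and that more terms raise the temporal accuracy, can be made concrete for a 1-D acoustic wave equation. In this hedged sketch, L stands for the operator c² d²/dx² and the optional dt⁴L²/12 term is the next term of the cosine expansion; the grid and velocity handling are assumptions.

```python
import numpy as np

# Sketch: the familiar second-order scheme
#   p(t+dt) = 2 p(t) - p(t-dt) + dt^2 L p(t),  with  L = c^2 d2/dx2,
# is the two-term truncation of the expansion; extra terms add higher powers
# of L (here the dt^4 L^2 / 12 correction). Illustrative only.

def laplacian(p, dx):
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    return lap

def step(p_now, p_prev, c, dx, dt, extra_term=False):
    Lp = c**2 * laplacian(p_now, dx)
    p_next = 2 * p_now - p_prev + dt**2 * Lp
    if extra_term:  # next term of the expansion, raising temporal accuracy
        p_next += dt**4 / 12 * c**2 * laplacian(c**2 * laplacian(p_now, dx), dx)
    return p_next
```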

  7. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    Science.gov (United States)

    Lee, Eun Seok

    2000-10-01

    An improved aerodynamics performance of a turbine cascade shape can be achieved by an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For the unsteady code validation, the Stokes's 2nd problem and the Poiseuille flow were chosen and compared with the computed results and analytic solutions. To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with

  8. Specification of a STEP Based Reference Model for Exchange of Robotics Models

    DEFF Research Database (Denmark)

    Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold

    ESPRIT Project 6457: "Interoperability of Standards for Robotics in CIME" (InterRob) belongs to the Subprogram "Computer Integrated Manufacturing and Engineering" of ESPRIT, the European Specific Programme for Research and Development in Information Technology supported by the European Commission. InterRob aims to develop an integrated solution to precision manufacturing by combining product data and database technologies with robotic off-line programming and simulation. Benefits arise from the use of high level simulation tools and developing standards for the exchange of product model data. Besides robot programming, the descriptions of geometry, kinematics, robotics, dynamics, and controller data using STEP are addressed as major goals of the project. The Project Consortium has now released the "Specification of a STEP Based Reference Model for Exchange of Robotics Models", on which a series...

  9. Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure

    Directory of Open Access Journals (Sweden)

    Diego Masotti

    2015-01-01

    Full Text Available The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array (TMA). The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used for localization of the tagged sensors to be energized; the entire 16-element TMA then provides the power to the detected tags, by exploiting the fundamental and first-sideband harmonic radiation. An investigation of the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy transfer performance of the entire nonlinear radiating system is demonstrated.

  10. Associations of office workers' objectively assessed occupational sitting, standing and stepping time with musculoskeletal symptoms.

    Science.gov (United States)

    Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M

    2018-04-22

    We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.

  11. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Science.gov (United States)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
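
    The principle behind the multi-time-step approach can be sketched compactly: uncorrelated position noise of variance σ² inflates a displacement-based velocity variance by roughly 2σ²/Δt², so estimating the variance at several Δt and extrapolating to 1/Δt² → 0 removes it. The code below is a schematic reading of that idea, not the authors' processing chain; the track layout and the linear fit are assumptions.

```python
import numpy as np

# Sketch of the multi-time-step idea: estimate the velocity variance from
# displacements over several time steps n*dt0 and fit it linearly against
# 1/dt^2; the intercept is the noise-free variance, the slope gives the
# position-noise level. `tracks` is assumed sampled at a fixed rate 1/dt0.

def noise_free_velocity_variance(tracks, dt0, steps=(1, 2, 3, 4)):
    inv_dt2, var = [], []
    for n in steps:
        v = (tracks[n:] - tracks[:-n]) / (n * dt0)   # velocity over n frames
        var.append(np.var(v))
        inv_dt2.append(1.0 / (n * dt0) ** 2)
    slope, intercept = np.polyfit(inv_dt2, var, 1)
    return intercept, slope / 2.0   # noise-free variance, noise sigma^2
```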

  12. A one-step, real-time PCR assay for rapid detection of rhinovirus.

    Science.gov (United States)

    Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M

    2010-01-01

    One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay by using 38 of 40 rhinovirus reference strains representing different serotypes, which could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.

  13. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
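
    For a nonrelativistic analogue, the ITS idea reduces to gradient descent on the energy: repeatedly applying ψ ← ψ − Δτ·Hψ and renormalizing damps every excited component at a rate set by its excitation energy. The sketch below solves a 1-D harmonic oscillator this way; the grid, potential, and step size are illustrative assumptions, and the paper's Dirac/Schroedinger-like problem is considerably more involved.

```python
import numpy as np

# Imaginary time step (ITS) evolution for a 1-D Schroedinger equation:
# psi <- psi - dtau * H psi, renormalized each step, converges to the
# ground state. Harmonic-oscillator test case; ground-state energy ~ 0.5.

n, dx, dtau = 400, 0.05, 1e-4
x = (np.arange(n) - n / 2) * dx
v = 0.5 * x**2                               # harmonic oscillator potential

def h_apply(psi):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + v * psi

psi = np.exp(-x**2)                          # arbitrary starting wave function
for _ in range(20000):
    psi -= dtau * h_apply(psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)      # renormalize each step

energy = np.sum(psi * h_apply(psi)) * dx     # should approach 0.5
```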

  14. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.

  15. The enhancement of time-stepping procedures in SYVAC A/C

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1986-01-01

    This report summarises the work carried out on SYVAC A/C between February and May 1985 aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: (i) long vault release, short geosphere response; (ii) short vault release, long geosphere response; (iii) short vault release, short geosphere response. The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)

  16. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    ...... This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects...... and the model has been extended to fit these data....

  17. An adaptive spatio-temporal smoothing model for estimating trends and step changes in disease risk

    OpenAIRE

    Rushworth, Alastair; Lee, Duncan; Sarran, Christophe

    2014-01-01

    Statistical models used to estimate the spatio-temporal pattern in disease risk from areal unit data represent the risk surface for each time period with known covariates and a set of spatially smooth random effects. The latter act as a proxy for unmeasured spatial confounding, whose spatial structure is often characterised by a spatially smooth evolution between some pairs of adjacent areal units while other pairs exhibit large step changes. This spatial heterogeneity is not c...

  18. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, the simulation is conducted for both modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.

  19. Modeling of SBS Phase Conjugation in Multimode Step Index Fibers

    National Research Council Canada - National Science Library

    Spring, Justin B

    2008-01-01

    ... limited, double-pass high-power amplifiers or coherent beam combination. Little modeling of such a fiber-based phase-conjugator has been done, making it difficult to make decisions about the right fiber to use...

  20. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Joná s D.; Sen, Mrinal K.

    2010-01-01

    popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM
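
    For readers unfamiliar with the LWM, one time step for the 1-D advection equation shows where the extra accuracy comes from: a Taylor expansion in time whose second-order term is converted into a spatial derivative. The sketch below is generic and assumes periodic boundaries; it is not the paper's seismic implementation.

```python
import numpy as np

# One Lax-Wendroff time step for the 1-D advection equation u_t + c u_x = 0.
# The dt^2 term is what buys second-order accuracy in time compared with
# forward Euler. Periodic boundaries via np.roll; parameters illustrative.

def lax_wendroff_step(u, c, dt, dx):
    um, up = np.roll(u, 1), np.roll(u, -1)                    # u_{i-1}, u_{i+1}
    return (u
            - c * dt / (2 * dx) * (up - um)                   # first-order term
            + c**2 * dt**2 / (2 * dx**2) * (up - 2 * u + um)) # LW correction
```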

  1. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    NARCIS (Netherlands)

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

    We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured

  2. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment.

    Science.gov (United States)

    Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim

    2017-06-01

    Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  4. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (e.g., Wavelet Decomposition/Wavelet Packet Decomposition/Empirical Mode Decomposition/Fast Ensemble Empirical Mode Decomposition) and Extreme Learning Machines. The originality of the study is to investigate the improvement brought to the Extreme Learning Machines by those mainstream signal decomposing algorithms in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms have better performance than the single Extreme Learning Machines; (3) in the comparisons of the decomposing algorithms in the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition in all the step predictions, respectively; and (4) the proposed algorithms are effective for accurate wind speed prediction.
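
    The forecaster used in all four hybrid models, an Extreme Learning Machine, is simple enough to sketch: a random, untrained hidden layer followed by output weights fitted in a single least-squares solve. The decomposition stage (wavelet, EMD, and variants) is omitted here; the layer sizes and sigmoid activation are assumptions, not the paper's settings.

```python
import numpy as np

# Bare-bones Extreme Learning Machine: random hidden layer, least-squares
# output weights. Sketch only; the hybrid models feed it decomposed sub-series.

class ELM:
    def __init__(self, n_in, n_hidden, rng=np.random.default_rng(0)):
        self.w = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)           # fixed random biases

    def _hidden(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))  # sigmoid layer

    def fit(self, x, y):
        h = self._hidden(x)
        self.beta, *_ = np.linalg.lstsq(h, y, rcond=None)     # one LS solve
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Usage sketch: lagged wind speeds as inputs, next value as target, e.g.
# ELM(n_in=6, n_hidden=50).fit(x_train, y_train).predict(x_test)
```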

  5. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Directory of Open Access Journals (Sweden)

    H. Nakajima

    2006-01-01

    Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  7. TMS field modelling-status and next steps

    DEFF Research Database (Denmark)

    Thielscher, Axel

    2013-01-01

    In the recent years, an increasing number of studies used geometrically accurate head models and finite element (FEM) or finite difference methods (FDM) to estimate the electric field induced by non-invasive neurostimulation techniques such as transcranial magnetic stimulation (TMS) or transcranial...

  8. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  9. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    Science.gov (United States)

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  11. Step training in a rat model for complex aneurysmal vascular microsurgery

    Directory of Open Access Journals (Sweden)

    Martin Dan

    2015-12-01

    Full Text Available Introduction: Microsurgery training is a key step for young neurosurgeons. In both vascular and peripheral nerve pathology, microsurgical techniques are useful tools for proper treatment. Many training models have been described, including ex vivo (chicken wings) and in vivo (rat, rabbit) ones. Complex microsurgery training includes termino-terminal vessel anastomosis and nerve repair. The aim of this study was to describe a reproducible complex microsurgery training model in rats. Materials and methods: The experimental animals were Brown Norway male rats between 10-16 weeks of age (average 13) and weighing between 250-400 g (average 320 g). We performed n=10 rat hind limb replantations. The surgical steps and preoperative management are carefully described. We evaluated vascular patency by clinical assessment: color, temperature, and capillary refill. The rats were inspected daily for any signs of infection. Nerve regeneration was assessed by the footprint method. Results: There were no cases of vascular compromise or autophagia. All rats had long term survival (>90 days). Nerve regeneration was clinically complete at 6 months postoperatively. The mean operative time was 183 minutes, and ischemia time was 25 minutes.

  12. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
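
    The overall control flow, expensive forces converged only at outer steps and inexpensive extrapolated forces at inner steps, can be sketched in outline. The fragment below is only the skeleton of the idea, not the ASFE scheme itself, which adds the discrete non-Eckart rotational transform, careful pair selection, and the OIN thermostat; every function named here is a placeholder and the extrapolation model is an assumption.

```python
import numpy as np

# Heavily simplified multiple-time-step pattern: converge the expensive
# solvation forces at outer steps; fill inner steps with a least-squares
# extrapolation from recently stored (coordinate, force) pairs.

def extrapolate_force(x, past_x, past_f):
    """Blend stored forces with least-squares coefficients fitted in
    coordinate space (crude stand-in for the ASFE fit)."""
    basis = np.stack([px.ravel() for px in past_x])          # (m, 3N)
    coef, *_ = np.linalg.lstsq(basis.T, x.ravel(), rcond=None)
    return sum(c * f for c, f in zip(coef, past_f))

def mts_md(x, v, dt_inner, n_inner, n_outer, expensive_force, mass):
    past_x, past_f = [], []
    for _ in range(n_outer):
        f = expensive_force(x)                    # e.g. converged 3D-RISM-KH
        past_x.append(x.copy()); past_f.append(f.copy())
        past_x, past_f = past_x[-8:], past_f[-8:] # keep a short history
        for _ in range(n_inner):                  # cheap inner steps
            f_in = extrapolate_force(x, past_x, past_f)
            v += dt_inner * f_in / mass           # symplectic-Euler update
            x += dt_inner * v
    return x, v
```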

  13. Experimental and modeling study on relation of pedestrian step length and frequency under different headways

    Science.gov (United States)

    Zeng, Guang; Cao, Shuchao; Liu, Chi; Song, Weiguo

    2018-06-01

    It is important to study pedestrian stepping behavior and characteristics for facility design and pedestrian flow study, owing to pedestrians' bipedal movement. In this paper, data on steps are extracted from the trajectories of pedestrians in a single-file experiment. It is found that step length and step frequency decrease by 75% and 33%, respectively, when the global density increases from 0.46 ped/m to 2.28 ped/m. With increasing headway, they first increase and then remain constant once the headway is beyond 1.16 m and 0.91 m, respectively. Step length and frequency under different headways can be described well by normal distributions. Meanwhile, relationships between step length and frequency under different headways exist. Step frequency decreases with increasing step length. However, the decrease tendencies depend on headway as a whole, and there are two decrease tendencies: when the headway is between about 0.6 m and 1.0 m, the decrease rate of the step frequency increases with step length, while it decreases when the headway is beyond about 1.0 m or below about 0.6 m. A model is built based on the experiment results. In fundamental diagrams, the results of simulation agree well with those of experiment. The study can be helpful for understanding pedestrian stepping behavior and designing public facilities.

  14. Effect of the processing steps on compositions of table olive since harvesting time to pasteurization.

    Science.gov (United States)

    Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A

    2013-08-01

    Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); during the processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids were the major fatty acids; the largest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol contents were in Amigdalolia and Fishomi, respectively, and the major damage occurred in the lye step; the highest and the lowest polyphenol contents were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during the lye step, in which the polyphenol content was reduced to 1/10 of its initial value; sterols did not undergo changes during the steps. A review of olive patents shows that many fruit characteristics, such as oil quality, fatty acids, and their quantities and fractions, can be changed by altering the cultivar and process.

  15. Avoid the tsunami of the Dirac sea in the imaginary time step method

    International Nuclear Information System (INIS)

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those in the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels the doubts on the ITS method in the relativistic system. (author)

  16. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  17. Detection and Correction of Step Discontinuities in Kepler Flux Time Series

    Science.gov (United States)

    Kolodziejczak, J. J.; Morris, R. L.

    2011-01-01

    PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
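
    A common way to flag step discontinuities of this kind is to compare robust statistics in windows before and after each sample and test the jump against the local scatter. The sketch below illustrates that generic approach only; it is not the PDC 8.0 algorithm, and the window size and threshold are assumptions.

```python
import numpy as np

# Illustrative step-discontinuity detector in the spirit of an SPSD search:
# flag samples where the median jump between the windows before and after
# greatly exceeds the local scatter (robust MAD estimate). Generic sketch.

def find_steps(flux, half_window=25, n_sigma=5.0):
    n = flux.size
    hits = []
    for i in range(half_window, n - half_window):
        before = flux[i - half_window:i]
        after = flux[i:i + half_window]
        jump = np.median(after) - np.median(before)
        scatter = 1.4826 * np.median(np.abs(before - np.median(before)))
        if scatter > 0 and abs(jump) > n_sigma * scatter:
            hits.append((i, jump))
    return hits  # candidate break points; correct by subtracting the jump
```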

  18. Improving stability of stabilized and multiscale formulations in flow simulations at small time steps

    KAUST Repository

    Hsu, Ming-Chen

    2010-02-01

    The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.

  19. Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection

    OpenAIRE

    Chen, Zhaokang; Shi, Bertram E.

    2017-01-01

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...

  20. Impedance models in time domain

    NARCIS (Netherlands)

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  1. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Application of a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated form and is error-prone when describing geometric models. Because of this, a conversion algorithm that can solve the problem by converting a general geometric model to an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed and targeted after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This proposed research is promising, and serves as a valuable reference for the majority of researchers involved in MCNP-related research.

  2. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed both by PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
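
    The PSD route rests on the standard random-vibration result that, for a linear system, the response PSD is |H(f)|² times the input PSD. The sketch below applies this to a single-degree-of-freedom oscillator; the input spectrum, frequency grid, and damping value are illustrative assumptions, not the report's structure.

```python
import numpy as np

# Probabilistic (PSD) response of a linear single-degree-of-freedom
# oscillator: S_out(f) = |H(f)|^2 * S_in(f); the RMS response follows by
# integrating the response PSD over frequency. Parameters are illustrative.

f = np.linspace(0.1, 30.0, 600)            # frequency grid, Hz
s_in = np.full_like(f, 1e-3)               # flat input acceleration PSD
fn, zeta = 5.0, 0.05                       # natural frequency, damping ratio

r = f / fn
h2 = 1.0 / ((1 - r**2) ** 2 + (2 * zeta * r) ** 2)   # |H(f)|^2
s_out = h2 * s_in                                     # response PSD

# RMS response from the PSD (trapezoidal integration)
rms = np.sqrt(np.sum(0.5 * (s_out[1:] + s_out[:-1]) * np.diff(f)))
```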

  3. Detection of Tomato black ring virus by real-time one-step RT-PCR.

    Science.gov (United States)

    Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G

    2011-01-01

    A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA, indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  5. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    Science.gov (United States)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. The model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
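
    A minimal simulation sketch of the idea, assuming (purely for illustration) that each unit jump reverses the previous one with probability p_rev while waiting times stay i.i.d. exponential, in the spirit of the bid-ask bounce; the parameters are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_jumps, p_rev, rate = 100_000, 0.7, 1.0

waits = rng.exponential(1.0 / rate, n_jumps)   # i.i.d. waiting times
jumps = np.empty(n_jumps)
jumps[0] = 1.0
for i in range(1, n_jumps):                    # one-step memory in jump signs
    jumps[i] = -jumps[i - 1] if rng.random() < p_rev else jumps[i - 1]

t = np.cumsum(waits)                           # jump epochs
x = np.cumsum(jumps)                           # walker (log-price) position

# lag-1 jump autocorrelation: analytically 1 - 2*p_rev for this rule
print("empirical lag-1 corr:", np.corrcoef(jumps[:-1], jumps[1:])[0, 1])
print("expected:", 1 - 2 * p_rev)
```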

  6. Numerical Investigation of Transitional Flow over a Backward Facing Step Using a Low Reynolds Number k-ε Model

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter V.

    In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar to turbulent transitional flow over a backward facing step...

  7. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.

    Science.gov (United States)

    Rogers, Mark W; Mille, Marie-Laure

    2016-08-15

    Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  8. Local time stepping with the discontinuous Galerkin method for wave propagation in 3D heterogeneous media

    NARCIS (Netherlands)

    Minisini, S.; Zhebel, E.; Kononov, A.; Mulder, W.A.

    2013-01-01

    Modeling and imaging techniques for geophysics are extremely demanding in terms of computational resources. Seismic data attempt to resolve smaller scales and deeper targets in increasingly more complex geologic settings. Finite elements enable accurate simulation of time-dependent wave propagation

  9. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews. The assessments provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.
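
    A toy Monte Carlo sketch of "survivability versus time" in this spirit: sample an uncertain time until conditions become untenable and an uncertain egress duration, then estimate the probability of completing egress in time as a function of warning time. The lognormal parameters are invented for illustration and are unrelated to the actual GSDO analyses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
hazard = rng.lognormal(mean=np.log(120), sigma=0.4, size=n)  # s until untenable
egress = rng.lognormal(mean=np.log(90),  sigma=0.3, size=n)  # s to reach safety

for head_start in (0, 15, 30, 60):      # warning time before the hazard clock
    p = np.mean(egress <= hazard + head_start)
    print(f"head start {head_start:3d} s -> survivability {p:.3f}")
```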

  10. Color Shift Modeling of Light-Emitting Diode Lamps in Step-Loaded Stress Testing

    OpenAIRE

    Cai, Miao; Yang, Daoguo; Huang, J.; Zhang, Maofen; Chen, Xianping; Liang, Caihang; Koh, S.W.; Zhang, G.Q.

    2017-01-01

    The color coordinate shift of light-emitting diode (LED) lamps is investigated by running three stress-loaded testing methods, namely step-up stress accelerated degradation testing, step-down stress accelerated degradation testing, and constant stress accelerated degradation testing. A power model is proposed as the statistical model of the color shift (CS) process of LED products. Consequently, a CS mechanism constant is obtained for detecting the consistency of CS mechanisms among various s...

  11. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information in its life-cycle requires standardized data exchange protocols. Standard for Exchange of Product Model Data (STEP) is such a standard that has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in the data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  12. A Step Forward to Closing the Loop between Static and Dynamic Reservoir Modeling

    Directory of Open Access Journals (Sweden)

    Cancelliere M.

    2014-12-01

    Full Text Available The current trend for history matching is to find multiple calibrated models instead of a single set of model parameters that match the historical data. The advantage of several current workflows involving assisted history matching techniques, particularly those based on heuristic optimizers or direct search, is that they lead to a number of calibrated models that partially address the problem of the non-uniqueness of the solutions. The importance of achieving multiple solutions is that calibrated models can be used for a true quantification of the uncertainty affecting the production forecasts, which represent the basis for technical and economic risk analysis. In this paper, the importance of incorporating the geological uncertainties in a reservoir study is demonstrated. A workflow, which includes the analysis of the uncertainty associated with the facies distribution for a fluvial depositional environment in the calibration of the numerical dynamic models and, consequently, in the production forecast, is presented. The first step in the workflow was to generate a set of facies realizations starting from different conceptual models. After facies modeling, the petrophysical properties were assigned to the simulation domains. Then, each facies realization was calibrated separately by varying permeability and porosity fields. Data assimilation techniques were used to calibrate the models in a reasonable span of time. Results showed that even the adoption of a conceptual model for facies distribution clearly representative of the reservoir internal geometry might not guarantee reliable results in terms of production forecast. Furthermore, results also showed that realizations which seem fully acceptable after calibration were not representative of the true reservoir internal configuration and provided wrong production forecasts; conversely, realizations which did not show a good fit of the production data could reliably predict the reservoir

  13. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
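
    STEPS itself implements the composition-and-rejection variant of the SSA; the sketch below is the simpler, classic direct-method Gillespie SSA for a single well-mixed reaction A + B -> C, included only to make the stochastic-kinetics idea concrete. The rate constant and molecule counts are arbitrary illustrative values.

```python
import random

def gillespie(a=1000, b=800, c=0, k=1e-3, t_end=10.0, seed=2):
    """Direct-method SSA for the single reaction A + B -> C."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        propensity = k * a * b
        if propensity == 0:                # reactants exhausted
            break
        t += rng.expovariate(propensity)   # time to next reaction event
        if t >= t_end:
            break
        a, b, c = a - 1, b - 1, c + 1      # fire A + B -> C
    return a, b, c

print(gillespie())
```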

  14. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
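
    The return-map analysis can be sketched as regressing each step time on the previous one and reporting the coefficient of determination; the synthetic step-time series below (with a weak, planted lag-1 dependence) is an assumption used only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
steps = 0.55 + 0.02 * rng.standard_normal(n)      # synthetic step times (s)
steps[1:] += 0.3 * (steps[:-1] - 0.55)            # weak lag-1 dependence

x, y = steps[:-1], steps[1:]                      # return map coordinates
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"slope={slope:.2f}, R^2={r2:.3f}")
print("step time variability (SD):", np.std(steps))
```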

  15. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
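
    Not the authors' BCF equations, but a minimal Euler-Maruyama sketch for a linearized system of coupled terrace-width SDEs with nearest-neighbour relaxation plus Gaussian white noise; K, D, the time step and the lattice size are illustrative assumptions, and the crude renormalization stands in for the conserved total width of the step train.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, D, dt, n_steps = 64, 1.0, 0.05, 1e-3, 20_000

w = np.ones(N)                                    # terrace widths
for _ in range(n_steps):
    lap = np.roll(w, -1) - 2 * w + np.roll(w, 1)  # periodic neighbours
    # Euler-Maruyama: drift * dt + sqrt(2 D dt) * Gaussian increment
    w += K * lap * dt + np.sqrt(2 * D * dt) * rng.standard_normal(N)
    w *= N / w.sum()                              # crude width conservation

print("steady-state TWD variance:", w.var())
```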

  17. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget, as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  19. Physiological and cognitive mediators for the association between self-reported depressed mood and impaired choice stepping reaction time in older people.

    NARCIS (Netherlands)

    Kvelde, T.; Pijnappels, M.A.G.M.; Delbaere, K.; Close, J.C.; Lord, S.R.

    2010-01-01

    Background. The aim of the study was to use path analysis to test a theoretical model proposing that the relationship between self-reported depressed mood and choice stepping reaction time (CSRT) is mediated by psychoactive medication use, physiological performance, and cognitive ability.A total of

  20. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    Full Text Available This paper considers life-testing experiments and how they are affected by stress factors: namely temperature, electricity loads, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase stress levels beyond normal use during the experiment in order to observe failures. The test items are assumed to follow the Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed. These include maximum likelihood estimation and confidence interval estimation based on asymptotic normality, which generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  1. Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves

    Science.gov (United States)

    Rošt'áková, Zuzana; Rosipal, Roman

    2018-02-01

    Sleep can be characterised as a dynamic process that has a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take into account its dynamic structure. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. Original and time-aligned sleep probabilistic curves were transformed into a finite-dimensional vector space, and their ability to predict subjects' age or daily measures was evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow wave sleep.
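
    Curve alignment can be as simple as removing a relative time shift before comparison; the sketch below estimates the lag maximising the cross-correlation between two synthetic sleep-state probability curves and shifts one onto the other. Real curve registration (e.g. landmark or warping methods) is more involved; the Gaussian-shaped curves are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 8, 960)                       # hours, 30 s epochs
curve_a = np.exp(-(t - 3.0) ** 2)                # prob. of some sleep state
curve_b = np.exp(-(t - 3.5) ** 2)                # same shape, shifted by 0.5 h

lags = np.arange(-120, 121)                      # candidate shifts in epochs
xc = [np.dot(curve_a, np.roll(curve_b, k)) for k in lags]
best = lags[int(np.argmax(xc))]
aligned_b = np.roll(curve_b, best)
print("estimated lag (epochs):", best)           # about -60 epochs = 0.5 h
```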

  2. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.

  3. Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation

    International Nuclear Information System (INIS)

    Kennedy, A.R.; Cairns, J.; Little, J.B.

    1984-01-01

    Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event, occurring, for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation in having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays is, indeed, exactly what would be expected on that hypothesis. (author)

  4. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.

    2010-04-01

    We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
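
    A back-of-the-envelope check of the cost statement above: a fourth-order LWM step that is 73 per cent longer but costs roughly twice the work per step gives about 2/1.73 ≈ 1.16 times the work per unit simulated time, i.e. around 16 per cent more, which the higher-order accuracy must justify. Pure arithmetic, no external data.

```python
dt_ratio   = 1.73   # LWM step size / leap-frog step size
work_ratio = 2.0    # LWM work per step / leap-frog work per step
print(f"relative cost per unit simulated time: {work_ratio / dt_ratio:.2f}")
```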

  5. Genomic prediction in a nuclear population of layers using single-step models.

    Science.gov (United States)

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of the single-step model with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of predicted breeding values by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements of prediction abilities were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
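
    For readers unfamiliar with the G-matrix machinery, here is a toy plain-GBLUP sketch (not the single-step H-matrix method evaluated in the paper): build VanRaden's genomic relationship matrix from centred genotypes and predict validation animals from training phenotypes. Population size, heritability and the random genotypes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, h2 = 200, 1000, 0.4
p = rng.uniform(0.1, 0.9, m)                       # allele frequencies
geno = rng.binomial(2, p, size=(n, m)).astype(float)
Z = geno - 2 * p                                   # centred genotypes
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))            # VanRaden G-matrix

u_true = Z @ rng.normal(0, 0.05, m)                # simulated breeding values
y = u_true + rng.normal(0, u_true.std() * np.sqrt((1 - h2) / h2), n)

train, valid = np.arange(150), np.arange(150, 200)
lam = (1 - h2) / h2                                # variance ratio
coef = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(train.size),
                       y[train] - y[train].mean())
gebv_valid = G[np.ix_(valid, train)] @ coef        # GEBVs for validation set
print("accuracy:", np.corrcoef(gebv_valid, u_true[valid])[0, 1])
```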

  6. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    Full Text Available In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of neutron source strength. The considered methodology has been implemented in our continuous energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
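
    To make the time-step question concrete, a one-nuclide toy: deplete N under dN/dt = -sigma*phi(t)*N with the flux ramping linearly across a burnup step. The "staircase" choice freezes the flux at its begin-of-step value and accumulates a systematic error relative to the exact ramp integral. The values are illustrative, and this is far simpler than the schemes compared in the paper.

```python
import math

sigma = 5.0e-22               # cm^2, illustrative absorber cross section
phi0, phi1 = 1.0e14, 2.0e14   # n/cm^2/s at step start and step end
dt = 30 * 86400               # one 30-day burnup step, in seconds

exact = math.exp(-sigma * 0.5 * (phi0 + phi1) * dt)  # integral of the ramp
staircase = math.exp(-sigma * phi0 * dt)             # begin-of-step flux held
print(f"N/N0 exact={exact:.4f}  staircase={staircase:.4f}  "
      f"rel.err={(staircase / exact - 1):+.2%}")
```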

  7. Methods for Assessing Item, Step, and Threshold Invariance in Polytomous Items Following the Partial Credit Model

    Science.gov (United States)

    Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.

    2008-01-01

    Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…

  8. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Directory of Open Access Journals (Sweden)

    Conor Lawless

    Full Text Available Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.

  9. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Science.gov (United States)

    Lawless, Conor; Jurk, Diana; Gillespie, Colin S; Shanley, Daryl; Saretzki, Gabriele; von Zglinicki, Thomas; Passos, João F

    2012-01-01

    Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.

  10. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive...... to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations....

  11. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Full Text Available Scientific customer value segmentation (CVS) is the base of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include many missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple classifiers ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset "German" from UCI and the real customer churn prediction dataset "China churn" show that ODCEM outperforms four commonly used "two-step" models and the ensemble-based model LMF and can provide better decision support for market managers.

  12. Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling

    Science.gov (United States)

    Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean

    2018-01-01

    Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of water column, with over 2 °C bias at some of the mooring stations. HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of water column at all mooring station locations. While HYCOM
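
    A sketch of the kind of point-wise model-versus-mooring comparison described here: bias, RMSE and the Willmott (1981) skill score for one co-located pair of series. The synthetic "model" and "obs" arrays are placeholders, not BRAN/HYCOM or ANMN data.

```python
import numpy as np

rng = np.random.default_rng(5)
obs = 15 + 2 * np.sin(np.linspace(0, 6 * np.pi, 500))      # e.g. temperature (C)
model = obs + 1.0 + 0.8 * rng.standard_normal(obs.size)    # warm-biased model

bias = np.mean(model - obs)
rmse = np.sqrt(np.mean((model - obs) ** 2))
# Willmott skill: 1 = perfect agreement, 0 = no skill
skill = 1 - np.sum((model - obs) ** 2) / np.sum(
    (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
print(f"bias={bias:.2f}, rmse={rmse:.2f}, Willmott skill={skill:.3f}")
```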

  13. Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

    OpenAIRE

    R. Prakash; B. Meenakshipriya; R. Kumaravelan

    2014-01-01

    This paper presents modeling and simulation of a Grid Connected Photovoltaic (PV) system by using an improved mathematical model. The model is used to study different parameter variations and their effects on the PV array, including operating temperature and solar irradiation level. In this paper a stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the suitable duty ratio at which the DC-DC converter should be operated to maximize the power output. Photo voltaic array with pro...
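
    The classic fixed-step perturb-and-observe loop is sketched below on a crude, invented P-V curve (the paper's stepped variant adapts the perturbation size, which is not reproduced here); function names and all numbers are assumptions for illustration.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Crude illustrative P-V curve; not a validated PV model."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0)) if v < v_oc else 0.0
    return max(i, 0.0) * v

def perturb_observe(v=20.0, dv=0.5, iters=60):
    p_prev = pv_power(v)
    for _ in range(iters):
        v += dv                  # perturb the operating voltage
        p = pv_power(v)          # observe the resulting power
        if p < p_prev:           # power dropped: reverse the perturbation
            dv = -dv
        p_prev = p
    return v, p_prev

print("operating point near the MPP:", perturb_observe())
```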

  14. Modelling and Fixed Step Simulation of a Turbo Charged Diesel Engine

    OpenAIRE

    Ritzén, Jesper

    2003-01-01

    Having an engine model that is accurate but not too complicated is desirable when working with on-board diagnosis or engine control. In this thesis a four-state mean value model is introduced. To make the model usable in an on-line automotive application, it is discrete and is simulated with a fixed-step-size solver. Modelling is done with simplicity as the main objective. Some simple static models are also presented. To validate the model, measurements were carried out in a Scania R124LB truck with a 12 lit...

  15. Coconut Model for Learning First Steps of Craniotomy Techniques and Cerebrospinal Fluid Leak Avoidance.

    Science.gov (United States)

    Drummond-Braga, Bernardo; Peleja, Sebastião Berquó; Macedo, Guaracy; Drummond, Carlos Roberto S A; Costa, Pollyana H V; Garcia-Zapata, Marco T; Oliveira, Marcelo Magaldi

    2016-12-01

    Neurosurgery simulation has gained attention recently due to changes in the medical system. First-year neurosurgical residents in low-income countries usually perform their first craniotomy on a real subject. Development of high-fidelity, cheap, and largely available simulators is a challenge in residency training. An original model for the first steps of craniotomy with cerebrospinal fluid leak avoidance practice using a coconut is described. The coconut is a drupe from Cocos nucifera L. (coconut tree). The green coconut has 4 layers, and some similarity can be seen between these layers and the human skull. The materials used in the simulation are the same as those used in the operating room. The coconut is placed on the head holder support with the face up. The burr holes are made until endocarp is reached. The mesocarp is dissected, and the conductor is passed from one hole to the other with the Gigli saw. The hook handle for the wire saw is positioned, and the mesocarp and endocarp are cut. After sawing the 4 margins, mesocarp is detached from endocarp. Four burr holes are made from endocarp to endosperm. Careful dissection of the endosperm is done, avoiding liquid albumen leak. The Gigli saw is passed through the trephine holes. Hooks are placed, and the endocarp is cut. After cutting the 4 margins, it is dissected from the endosperm and removed. The main goal of the procedure is to remove the endocarp without fluid leakage. The coconut model for learning the first steps of craniotomy and cerebrospinal fluid leak avoidance has some limitations. It is more realistic while trying to remove the endocarp without damage to the endosperm. It is also cheap and can be widely used in low-income countries. However, the coconut does not have anatomic landmarks. The mesocarp makes the model less realistic because it has fibers that make the procedure more difficult and different from a real craniotomy. The model has a potential pedagogic neurosurgical application for

  16. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot's vision system, which can enhance the robot's real-time interaction ability with the human, is one of the main keys toward realizing such an autonomous robot. In this work, we are suggesting a bioinspired vision system that helps to develop an advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the rules that the horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptors distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.

  17. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using an unsteady flow, Reynolds-averaged Navier-Stokes equations solver that was based on explicit, finite difference; Runge-Kutta multistage time marching; and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.

  18. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRN. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. The particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from the studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one through a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step
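
    The two-step idea in miniature, with plain least squares standing in for the RNN/RMLP fit: (1) rank candidate edges by the magnitude of fitted weights, (2) keep the top-ranked edges as the network. The data, the planted edge and the cut-off are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T, genes = 40, 5
X = rng.standard_normal((T, genes))               # expression time series
X[1:, 0] += 0.8 * X[:-1, 1]                       # planted edge: gene1 -> gene0

# step 1: fit x_g(t+1) ~ W[g] @ x(t) and rank edges by |weight|
W = np.zeros((genes, genes))
for g in range(genes):
    W[g], *_ = np.linalg.lstsq(X[:-1], X[1:, g], rcond=None)

ranked = np.dstack(np.unravel_index(np.argsort(-np.abs(W), axis=None),
                                    W.shape))[0]
# step 2: keep the top-ranked edges as the reconstructed network
print("top edges (target, source):", [tuple(e) for e in ranked[:3]])
```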

  19. First steps towards modelling high burnup effect in UO{sub 2} fuel

    Energy Technology Data Exchange (ETDEWEB)

    O'Carroll, C; Lassmann, K; Laar, J Van De; Walker, C T [CEC Joint Research Centre, Karlsruhe (Germany)

    1997-08-01

    High burnup initiates a process that can lead to major microstructural changes near the edge of the fuel: formation of subgrains, the loss of matrix fission gas and an increase in porosity. A consequence of this is a decrease of thermal conductivity near the edge of the fuel, which may have major implications for the performance of LWR fuels at higher burnup. The mechanism for the changes in grain structure, the apparent depletion of Xe and the increase in porosity is associated with the high fission density at the fuel periphery. This is in turn due to the preferential capture of epithermal neutrons in the resonances of {sup 238}U. The new model TUBRNP predicts the radial burnup profile as a function of time together with the radial profile of plutonium. The model has been validated with data from LWR UO{sub 2} fuels with enrichments in the range 2 to 8.25% and burnups between 21 and 75 GWd/t. It has been reported that at high burnup EPMA measures a sharp decrease in the concentration of Xe near the fuel surface. This loss of Xe is interpreted as a signal that the gas has been swept out of the original grains into pores: this 'missing' Xe has been measured by XRF. It has been noted experimentally that the restructuring (Xe depletion and changes in grain structure) has an onset threshold at a local burnup in the region of 70 to 80 GWd/t: a specific value was taken for use in the model. For a given fuel TUBRNP predicts the local burnup profile, and the depth corresponding to the threshold value is taken to be the thickness of the Xe depleted region. The theoretical predictions have been compared with experimental data. The results are presented and should be seen as a first step in the development of a more detailed model of this phenomenon. (author). 22 refs, 9 figs, 2 tabs.

  20. Development and evaluation of a real-time one step Reverse-Transcriptase PCR for quantitation of Chandipura Virus

    Directory of Open Access Journals (Sweden)

    Tandale Babasaheb V

    2008-12-01

    Full Text Available Abstract Background Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55–75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one-step reverse transcriptase PCR (real-time one-step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods Primers and probe for the P gene were designed and used to standardize the real-time one-step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one-step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one-step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results Real-time one-step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10^2–10^10 copies (r^2 = 0.99), with a maximum coefficient of variation (CV) of 5.91% for IVT RNA. The newly developed real-time RT-PCR was on par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of real-time one-step RT-PCR and nested RT-PCR was found to be 1.2 × 10^0 PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10^2 PFU/ml). Vero and PS cell lines (1.2 × 10^3 PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy

  1. Rotordynamic analysis for stepped-labyrinth gas seals using Moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high performance compressors, gas turbines, and steam turbines. The bulk-flow is assumed for a single cavity control volume set up in a stepped labyrinth cavity and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows a good qualitative agreement of leakage characteristics with Scharrer's analysis, but underpredicts leakage by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values compared with Scharrer's analysis.
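
    The bulk-flow analysis needs a wall friction factor in each control volume; a commonly quoted explicit Moody approximation (valid roughly for 4e3 < Re < 1e7 and small relative roughness) is sketched below. Whether the paper uses exactly this closed form is an assumption here.

```python
def moody_friction_factor(re, rel_roughness):
    """Explicit Moody (1947) approximation to the Darcy friction factor.
    re: Reynolds number; rel_roughness: roughness height / hydraulic diameter."""
    return 0.0055 * (1.0 + (2.0e4 * rel_roughness + 1.0e6 / re) ** (1.0 / 3.0))

for re in (1e4, 1e5, 1e6):
    print(f"Re={re:.0e}: f={moody_friction_factor(re, 1e-3):.4f}")
```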

  2. Bereday and Hilker: Origins of the "Four Steps of Comparison" Model

    Science.gov (United States)

    Adick, Christel

    2018-01-01

    The article draws attention to the forgotten ancestry of the "four steps of comparison" model (description--interpretation--juxtaposition--comparison). Comparativists largely attribute this to George Z. F. Bereday [1964. "Comparative Method in Education." New York: Holt, Rinehart and Winston], but among German scholars, it is…

  3. Finite cluster renormalization and new two step renormalization group for Ising model

    International Nuclear Information System (INIS)

    Benyoussef, A.; El Kenz, A.

    1989-09-01

    New types of renormalization group theory using the generalized Callen identities are exploited in the study of the Ising model. Another type of two-step renormalization is proposed. Critical couplings and critical exponents y_T and y_H are calculated by these methods for square and simple cubic lattices, using different size clusters. (author). 17 refs, 2 tabs

  4. The cc-bar and bb-bar spectroscopy in the two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaiserslautern Univ.

    1984-07-01

    We investigate the spectroscopy of the charmonium (cc-bar) and bottomonium (bb-bar) bound states in a static, flavour-independent, nonrelativistic quark-antiquark (qq-bar) two-step potential model proposed earlier. Our predictions are in good agreement with experimental data and with other theoretical predictions. (author)

  5. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task, and a precondition if such methods are to be used routinely, is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all noninvasively measured during hypoxemia in healthy volunteers, include four signals measured using near-infrared spectroscopy along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  6. First-time whole blood donation: A critical step for donor safety and retention on first three donations.

    Science.gov (United States)

    Gillet, P; Rapaille, A; Benoît, A; Ceinos, M; Bertrand, O; de Bouyalsky, I; Govaerts, B; Lambermont, M

    2015-01-01

    Whole blood donation is generally safe although vasovagal reactions (VVR) can occur (approximately 1%). Risk factors are well known and prevention measures have been shown to be efficient. This study evaluates the impact of the occurrence of a vasovagal reaction on donor retention over the first three blood donations. Our study of data collected over three years evaluated the impact of classical risk factors and provided a model including the best combination of covariates predicting VVR. The impact of a reaction at the first donation on return rate and complications until the third donation was evaluated. Our data (523,471 donations) confirmed the classical risk factors (gender, age, donor status and relative blood volume). After stepwise variable selection, donor status, relative blood volume and their interaction were the only remaining covariates in the model. Of 33,279 first-time donors monitored over a period of at least 15 months, the first three donations were followed. The data emphasised the impact of a complication at the first donation. The return rate for a second donation was reduced and the risk of vasovagal reaction was increased at least until the third donation. First-time donation is a crucial step in the donor's career. Donors who experienced a reaction at their first donation have a lower return rate for a second donation and a higher risk of vasovagal reaction at least until the third donation. Prevention measures have to be put in place to improve donor retention and provide blood banks with an adequate blood supply. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  7. Iteratively improving Hi-C experiments one step at a time.

    Science.gov (United States)

    Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton

    2018-04-30

    The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Seven steps to raise world security. Op-Ed, published in the Financial Times

    International Nuclear Information System (INIS)

    ElBaradei, M.

    2005-01-01

    In recent years, three phenomena have radically altered the security landscape: the emergence of a nuclear black market, the determined efforts by more countries to acquire technology to produce the fissile material usable in nuclear weapons, and the clear desire of terrorists to acquire weapons of mass destruction. The IAEA has been trying to solve these new problems with existing tools, but for every step forward we have exposed vulnerabilities in the system. The system itself - the regime that implements the non-proliferation treaty (NPT) - needs reinforcement. Some of the necessary remedies can be taken in New York at the meeting to be held in May, but only if governments are ready to act. With seven straightforward steps, and without amending the treaty, this conference could reach a milestone in strengthening world security. The first step: put a five-year hold on additional facilities for uranium enrichment and plutonium separation. Second, speed up existing efforts, led by the US global threat reduction initiative and others, to modify the research reactors worldwide operating with highly enriched uranium - particularly those with metal fuel that could be readily employed as bomb material. Third, raise the bar for inspection standards by establishing the 'additional protocol' as the norm for verifying compliance with the NPT. Fourth, call on the United Nations Security Council to act swiftly and decisively in the case of any country that withdraws from the NPT, in terms of the threat the withdrawal poses to international peace and security. Fifth, urge states to act on the Security Council's recent resolution 1540, to pursue and prosecute any illicit trading in nuclear material and technology. Sixth, call on the five nuclear weapon states party to the NPT to accelerate implementation of their 'unequivocal commitment' to nuclear disarmament, building on efforts such as the 2002 Moscow treaty between Russia and the US. Last, acknowledge the volatility of

  9. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM, in contrast to video modeling and video prompting, allows repetition of the video model (looping) as many times as needed while the user completes…

  10. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
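
    To make the recursion concrete, the following is a minimal Python sketch of an RKL1-style superstep applied to a 1D periodic heat equation. The stage coefficients follow the Legendre recursion described above; the grid, operator and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rkl1_step(u, dt, s, L):
    """One s-stage RKL1 superstep for du/dt = L(u).

    Legendre-based recursion coefficients (first-order variant):
    mu_j = (2j-1)/j, nu_j = (1-j)/j, mu~_j = mu_j * 2/(s^2+s).
    """
    b = 2.0 / (s * s + s)
    y_prev2 = u                          # Y_0
    y_prev1 = u + b * dt * L(u)          # Y_1
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j
        nu = (1.0 - j) / j
        y_prev2, y_prev1 = y_prev1, (mu * y_prev1 + nu * y_prev2
                                     + mu * b * dt * L(y_prev1))
    return y_prev1

# Test problem: 1D heat equation u_t = D*u_xx on a periodic grid
D, n = 1.0, 128
dx = 1.0 / n
lap = lambda u: D * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-((x - 0.5) / 0.05) ** 2)

dt_expl = 0.5 * dx**2 / D                  # forward-Euler stability limit
s = 8
dt_super = 0.5 * (s * s + s) * dt_expl     # superstep is (s^2+s)/2 times larger
u = rkl1_step(u, dt_super, s, lap)
```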

  11. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte-Carlo simulations

    Science.gov (United States)

    Patrone, Paul; Einstein, T. L.; Margetis, Dionisios

    2011-03-01

    We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.

  12. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    Science.gov (United States)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of the AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows, for both the daily and the hourly time-step. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, which is attributed to the reduced noise in the data, together with the modelling techniques and their training approach, the appropriate selection of network architecture, the required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to larger variations and a smaller number of observed values.
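
    As an illustration of the combined-input, multi-time-step-ahead setup described above, here is a hedged Python sketch that builds lagged rainfall-plus-inflow inputs and feeds one-step predictions back recursively. A linear least-squares model stands in for the ANN/ANFIS/LGP techniques, and all data are synthetic.

```python
import numpy as np

def make_dataset(rain, flow, lags=3):
    """Combined cause-effect inputs: lagged rainfall and lagged inflow -> next inflow."""
    X, y = [], []
    for t in range(lags, len(flow)):
        X.append(np.r_[rain[t - lags:t], flow[t - lags:t], 1.0])  # 1.0 = bias term
        y.append(flow[t])
    return np.array(X), np.array(y)

def multi_step_forecast(w, rain, flow_hist, horizon, lags=3):
    """Recursive multi-time-step-ahead prediction: each one-step prediction is
    fed back as an input; future rainfall is assumed known over the horizon."""
    flow = list(flow_hist)
    preds = []
    for _ in range(horizon):
        t = len(flow)
        x = np.r_[rain[t - lags:t], flow[-lags:], 1.0]
        preds.append(float(x @ w))
        flow.append(preds[-1])
    return preds

# Synthetic rainfall-inflow series standing in for the observed data
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, 400)
flow = np.convolve(rain, [0.5, 0.3, 0.2])[:400] + rng.normal(0.0, 0.1, 400)

X, y = make_dataset(rain, flow)
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for ANN/ANFIS/LGP
print(multi_step_forecast(w, rain, flow[:300], horizon=5))
```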

  13. Three-step management of pneumothorax: time for a re-think on initial management

    Science.gov (United States)

    Kaneda, Hiroyuki; Nakano, Takahito; Taniguchi, Yohei; Saito, Tomohito; Konobu, Toshifumi; Saito, Yukihito

    2013-01-01

    Pneumothorax is a common disease worldwide, but surprisingly, its initial management remains controversial. There are some published guidelines for the management of spontaneous pneumothorax. However, they differ in some respects, particularly in initial management. In published trials, the objective of treatment has not been clarified and it is not possible to compare the treatment strategies between different trials because of inappropriate evaluations of the air leak. Therefore, there is a need to outline the optimal management strategy for pneumothorax. In this report, we systematically review published randomized controlled trials of the different treatments of primary spontaneous pneumothorax, point out controversial issues and finally propose a three-step strategy for the management of pneumothorax. There are three important characteristics of pneumothorax: potentially lethal respiratory dysfunction; air leak, which is the obvious cause of the disease; frequent recurrence. These three characteristics correspond to the three steps. The central idea of the strategy is that the lung should not be expanded rapidly, unless absolutely necessary. The primary objective of both simple aspiration and chest drainage should be the recovery of acute respiratory dysfunction or the avoidance of respiratory dysfunction and subsequent complications. We believe that this management strategy is simple and clinically relevant and not dependent on the classification of pneumothorax. PMID:23117233

  14. Physical modeling of vortical cross-step flow in the American paddlefish, Polyodon spathula

    Science.gov (United States)

    Brooks, Hannah; Haines, Grant E.; Lin, M. Carly

    2018-01-01

    Vortical cross-step filtration in suspension-feeding fish has been reported recently as a novel mechanism, distinct from other biological and industrial filtration processes. Although crossflow passing over backward-facing steps generates vortices that can suspend, concentrate, and transport particles, the morphological factors affecting this vortical flow have not been identified previously. In our 3D-printed models of the oral cavity for ram suspension-feeding fish, the angle of the backward-facing step with respect to the model’s dorsal midline affected vortex parameters significantly, including rotational, tangential, and axial speed. These vortices were comparable to those quantified downstream of the backward-facing steps that were formed by the branchial arches of preserved American paddlefish in a recirculating flow tank. Our data indicate that vortices in cross-step filtration have the characteristics of forced vortices, as the flow of water inside the oral cavity provides the external torque required to sustain forced vortices. Additionally, we quantified a new variable for ram suspension feeding termed the fluid exit ratio. This is defined as the ratio of the total open pore area for water leaving the oral cavity via spaces between branchial arches that are not blocked by gill rakers, divided by the total area for water entering through the gape during ram suspension feeding. Our experiments demonstrated that the fluid exit ratio in preserved paddlefish was a significant predictor of the flow speeds that were quantified anterior of the rostrum, at the gape, directly dorsal of the first ceratobranchial, and in the forced vortex generated by the first ceratobranchial. Physical modeling of vortical cross-step filtration offers future opportunities to explore the complex interactions between structural features of the oral cavity, vortex parameters, motile particle behavior, and particle morphology that determine the suspension, concentration, and

  15. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    Science.gov (United States)

    Dessens, Olivier

    2016-04-01

    Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important as it can have great influence on future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) have formulated a method of reconstructing general circulation models' (GCMs') climate response to emission trajectories through an idealized experiment. This method is called the "step-response approach" and is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions in meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification on the results of the model in terms of climate and energy system responses. The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of
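
    A minimal sketch of the step-response idea follows, assuming the scenario forcing is decomposed into yearly increments that each excite a scaled copy of the GCM's abrupt-step temperature response. The response shape and forcing numbers below are hypothetical placeholders, not TIAM-UCL or Met Office values.

```python
import numpy as np

def step_response_temperature(dF, R_step, F_step):
    """Superpose scaled copies of the GCM's abrupt-step temperature response:
    each yearly forcing increment dF[t'] excites (dF[t']/F_step) * R_step(t-t')."""
    n = len(dF)
    T = np.zeros(n)
    for tp in range(n):
        T[tp:] += dF[tp] / F_step * R_step[:n - tp]
    return T

# Hypothetical numbers, for illustration only
years = 100
R_step = 3.0 * (1.0 - np.exp(-np.arange(years) / 30.0))  # idealised step response, K
F = np.linspace(0.0, 5.0, years)                         # ramped forcing, W m^-2
dF = np.diff(F, prepend=0.0)
T = step_response_temperature(dF, R_step, F_step=3.7)    # ~2xCO2 forcing scale
print(f"temperature anomaly after {years} years: {T[-1]:.2f} K")
```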

  16. Impact of first-step potential and time on the vertical growth of ZnO nanorods on ITO substrate by two-step electrochemical deposition

    International Nuclear Information System (INIS)

    Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae

    2013-01-01

    Highlights: •We grew vertical ZnO nanorods on an ITO substrate using a two-step continuous potential process. •The nucleation for ZnO nanorod growth was changed by the first-step potential and its duration. •The vertical ZnO nanorods were well grown when the first-step potential was −1.2 V for 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium doped tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of the ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on the ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest ratio of the PL near-band-edge emission to the deep-level emission peak intensity (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration

  17. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Full Text Available Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors such as age, sex, social factors, environmental variability etc., as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, the development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with the climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions, validity and construction of the confidence intervals. The results demonstrate the applicability of our method for different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, the disease incidence at zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city
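
    The core of the approach, one-lag non-linear regression on the SPR series with a residual-based confidence interval, can be sketched as follows. The series is synthetic and the quadratic fit is a simplified stand-in for the paper's multi-step variable-induction procedure.

```python
import numpy as np

# Hypothetical monthly SPR series with a seasonal component
rng = np.random.default_rng(1)
t = np.arange(120)
spr = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 0.3, 120)

# One-lag autoregressive pairing: predict SPR[t] from SPR[t-1]
x, y = spr[:-1], spr[1:]
coef = np.polyfit(x, y, deg=2)          # simple non-linear (quadratic) regression
resid = y - np.polyval(coef, x)

# One-step-ahead forecast with a rough ~95% interval from the residual spread
y_hat = np.polyval(coef, spr[-1])
half = 1.96 * resid.std(ddof=3)         # ddof ~ number of fitted coefficients
print(f"next-month SPR forecast: {y_hat:.2f} +/- {half:.2f}")
```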

  18. Modelling of epitaxial film growth with an Ehrlich-Schwoebel barrier dependent on the step height

    International Nuclear Information System (INIS)

    Leal, F F; Ferreira, S C; Ferreira, S O

    2011-01-01

    The formation of mounded surfaces in epitaxial growth is attributed to the presence of barriers against interlayer diffusion in the terrace edges, known as Ehrlich-Schwoebel (ES) barriers. We investigate a model for epitaxial growth using an ES barrier explicitly dependent on the step height. Our model has an intrinsic topological step barrier even in the absence of an explicit ES barrier. We show that mounded morphologies can be obtained even for a small barrier while a self-affine growth, consistent with the Villain-Lai-Das Sarma equation, is observed in the absence of an explicit step barrier. The mounded surfaces are described by a super-roughness dynamical scaling characterized by locally smooth (facetted) surfaces and a global roughness exponent α > 1. The thin film limit is featured by surfaces with self-assembled three-dimensional structures having an aspect ratio (height/width) that may increase or decrease with temperature depending on the strength of the step barrier. (fast track communication)

  19. Linear system identification via backward-time observer models

    Science.gov (United States)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  20. Effects of varying the step particle distribution on a probabilistic transport model

    International Nuclear Information System (INIS)

    Bouzat, S.; Farengo, R.

    2005-01-01

    The consequences of varying the step particle distribution on a probabilistic transport model, which captures the basic features of transport in plasmas and was recently introduced in Ref. 1 [B. Ph. van Milligen et al., Phys. Plasmas 11, 2272 (2004)], are studied. Different superdiffusive transport mechanisms generated by a family of distributions with algebraic decays (Tsallis distributions) are considered. It is observed that the possibility of changing the superdiffusive transport mechanism improves the flexibility of the model for describing different situations. The use of the model to describe the low (L) and high (H) confinement modes is also analyzed

  1. Preparatory steps for a robust dynamic model for organically bound tritium dynamics in agricultural crops

    Energy Technology Data Exchange (ETDEWEB)

    Melintescu, A.; Galeriu, D. [' Horia Hulubei' National Institute for Physics and Nuclear Engineering, Bucharest-Magurele (Romania); Diabate, S.; Strack, S. [Institute of Toxicology and Genetics, Karlsruhe Institute of Technology - KIT, Eggenstein-Leopoldshafen (Germany)

    2015-03-15

    The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms, and a fully mechanistic model is difficult to develop due to the complexity of the processes involved in tritium transfer and environmental conditions. First, a review of existing models (ORYZA2000, CROPTRIT and WOFOST) is made, presenting their features and limits. Secondly, the preparatory steps for a robust model are discussed, considering the role of dry matter and the photosynthesis contribution to the OBT (Organically Bound Tritium) dynamics in crops.

  2. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion
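
    A possible discretization of the Riemann–Liouville step fractional Brownian motion follows, with a Hurst exponent that switches from H = 1/2 (normal diffusion) at short times to H = 1/4 (single-file subdiffusion) at long times. The crossover time and step sizes are illustrative assumptions.

```python
import numpy as np
from math import gamma

def rl_step_fbm(n, dt, t_cross, h_small, h_large, rng):
    """Riemann-Liouville fBm with a step-function Hurst exponent:
    B_H(t) = Gamma(H+1/2)^-1 * sum_{s<t} (t-s)^(H-1/2) dB(s),
    where H = h_small for t < t_cross and h_large afterwards."""
    dB = rng.normal(0.0, np.sqrt(dt), n)
    t = np.arange(1, n + 1) * dt
    B = np.zeros(n)
    for i in range(n):
        H = h_small if t[i] < t_cross else h_large
        s = t[:i + 1] - dt                   # left endpoints of the increments
        B[i] = ((t[i] - s) ** (H - 0.5) @ dB[:i + 1]) / gamma(H + 0.5)
    return t, B

# H = 1/2 (normal diffusion) at short times, H = 1/4 (single-file) at long times
rng = np.random.default_rng(2)
t, B = rl_step_fbm(n=500, dt=0.01, t_cross=1.0, h_small=0.5, h_large=0.25, rng=rng)
```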

  3. How to Use the Actor-Partner Interdependence Model (APIM) to Estimate Different Dyadic Patterns in MPLUS: A Step-by-Step Tutorial

    Directory of Open Access Journals (Sweden)

    Fitzpatrick, Josée

    2016-01-01

    Full Text Available Dyadic data analysis with distinguishable dyads assesses the variance, not only between dyads, but also within the dyad when members are distinguishable on a known variable. In past research, the Actor-Partner Interdependence Model (APIM) has been the statistical model of choice in order to take into account this interdependence. Although this method has received considerable interest in the past decade, to our knowledge, no specific guide or tutorial exists to describe how to test an APIM model. In order to close this gap, this article will provide researchers with a step-by-step tutorial for assessing the most recent advancements of the APIM with the use of structural equation modeling (SEM). The present tutorial will also utilize the statistical program MPLUS.

  4. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.

  5. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  6. Time-stepped & discrete-event simulations of electromagnetic propulsion systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The existing plasma codes are ill suited for modeling of mixed resolution problems, such as the plasma sail, where the system under study comprises subsystems with...

  7. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

    Full Text Available A numerical model was developed for simulating a three-dimensional multilayer hydrodynamic and thermodynamic model in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. The model was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques were described and the model test and application were presented. For the model application to the northern part of Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles were well consistent with the field observations.

  8. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signals' low-frequency components, and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with the maximum improvement being around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
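
    The leap-step idea, fitting the autoregression on samples spaced several epochs apart rather than on consecutive ones, can be sketched as below. The series is a synthetic stand-in for the EOP 08 C04 LOD data, and the model order and leap size are arbitrary illustrative choices.

```python
import numpy as np

def fit_ar(x, p, leap=1):
    """Least-squares AR(p) fit on samples spaced `leap` apart:
    x[t] ~ sum_i a_i * x[t - i*leap]; leap=1 recovers the ordinary AR model."""
    rows, y = [], []
    for t in range(p * leap, len(x)):
        rows.append([x[t - i * leap] for i in range(1, p + 1)])
        y.append(x[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return a

def predict(x, a, steps, leap=1):
    """Iterated prediction using the (leap-step) AR coefficients."""
    x = list(x)
    for _ in range(steps):
        x.append(sum(a[i - 1] * x[-i * leap] for i in range(1, len(a) + 1)))
    return x[-steps:]

# Synthetic LOD-like series (annual + semi-annual terms plus noise)
rng = np.random.default_rng(3)
t = np.arange(2000.0)
lod = (np.sin(2 * np.pi * t / 365.24)
       + 0.5 * np.sin(2 * np.pi * t / 182.62)
       + rng.normal(0.0, 0.05, t.size))

a_ar = fit_ar(lod, p=30, leap=1)     # ordinary AR on consecutive samples
a_ls = fit_ar(lod, p=30, leap=5)     # leap-step AR sees a 5x longer window
print(predict(lod, a_ls, steps=60, leap=5)[:3])
```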

  9. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment

    OpenAIRE

    Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K

    2017-01-01

    Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...

  10. Modelling nematode movement using time-fractional dynamics.

    Science.gov (United States)

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
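
    A minimal sketch of such a correlated random walk is given below, with von Mises turning angles and gamma-distributed speeds as placeholder distributions for the empirically measured ones; the paper's observed temporal correlations between angle and speed are not reproduced here.

```python
import numpy as np

def correlated_random_walk(n, dt, rng, kappa=4.0, shape=2.0, scale=0.5):
    """2D correlated random walk: turning angles from a von Mises distribution
    (concentration kappa gives directional persistence), speeds gamma-distributed.
    Both distributions are placeholders for the empirically observed ones."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    pos = np.zeros((n, 2))
    for i in range(1, n):
        theta += rng.vonmises(0.0, kappa)   # small turns are most likely
        speed = rng.gamma(shape, scale)
        pos[i] = pos[i - 1] + speed * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

# Mean-squared displacement over an ensemble of trails, to probe (sub)diffusion
rng = np.random.default_rng(4)
trails = np.stack([correlated_random_walk(1000, 1.0, rng) for _ in range(50)])
msd = ((trails - trails[:, :1, :]) ** 2).sum(axis=-1).mean(axis=0)
```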

  11. Peak-load pricing in two-step-modeling of power generation and transmission

    International Nuclear Information System (INIS)

    Korunig, Jens-Holger

    2005-01-01

    Electricity transmission and distribution networks constitute a monopolistic bottleneck, and their use is unavoidable. In the context of the liberalization of electricity markets, vertical separation makes it possible to create a competitive generation market alongside separate transmission and distribution networks that are made available to all potential network users on equal terms. A private, independent network operator, however, would run its network in the interest of profit: it would probably dimension the network smaller and would try to align transmission prices with actual costs. From the system operator's point of view, as well as from the economic point of view, time-dependent transmission prices are preferable in the face of periodically changing demand. The question that arises is to what extent the ownership structure affects prices and quantities for power transmission, and which ownership structure is best from an economic perspective (maximization of welfare, ...). It is to be expected that a higher degree of competition yields higher efficiency and greater social welfare. This is analysed in a two-stage model with period-dependent demand, modelling both the generation and the transmission sector under different ownership structures. Different market types are assumed on both the generation and the distribution stage, and the resulting market outcomes are examined for their welfare effects. This calls for realistic modelling above all of the network sector, which (in contrast to generation) constitutes a natural monopoly: a natural monopoly necessarily has subadditive cost functions. The network sector (and, in a further step, also power generation) is modelled with decreasing average and marginal costs (current research task). If any behaviour is permitted, an enterprise which possesses this natural monopoly will extract monopolist

  12. A stepped leader model for lightning including charge distribution in branched channels

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)

    2014-09-14

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  13. A stepped leader model for lightning including charge distribution in branched channels

    International Nuclear Information System (INIS)

    Shi, Wei; Zhang, Li; Li, Qingmin

    2014-01-01

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  14. From representing to modelling knowledge: Proposing a two-step training for excellence in concept mapping

    Directory of Open Access Journals (Sweden)

    Joana G. Aguiar

    2017-09-01

    Full Text Available Training users in the concept mapping technique is critical for ensuring a high-quality concept map in terms of graphical structure and content accuracy. However, assessing excellence in concept mapping through structural and content features is a complex task. This paper proposes a two-step sequential training in concept mapping. The first step requires the fulfilment of low-order cognitive objectives (remember, understand and apply) to facilitate novices’ development into good Cmappers by honing their knowledge representation skills. The second step requires the fulfilment of high-order cognitive objectives (analyse, evaluate and create) to grow good Cmappers into excellent ones through the development of knowledge modelling skills. Based on Bloom’s revised taxonomy and cognitive load theory, this paper presents theoretical accounts to (1) identify the criteria distinguishing good and excellent concept maps, (2) inform instructional tasks for concept map elaboration and (3) propose a prototype for training users on concept mapping combining online and face-to-face activities. The proposed training application and the institutional certification are the next steps for the mature use of concept maps for educational as well as business purposes.

  15. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  16. Seven Steps to Heaven: Time and Tide in 21st Century Contemporary Music Higher Education

    Science.gov (United States)

    Mitchell, Annie K.

    2018-01-01

    Throughout the time of my teaching career, the tide has exposed changes in the nature of music, students and music education. This paper discusses teaching and learning in contemporary music at seven critical stages of 21st century music education: i) diverse types of undergraduate learners; ii) teaching traditional classical repertoire and skills…

  17. Time stepping free numerical solution of linear differential equations: Krylov subspace versus waveform relaxation

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Oseledets, I.V.; Tyrtyshnikov, E.E.

    2013-01-01

    The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift and invert technique.

  18. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
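
    The multi-unit leap can be sketched in a few lines of Python: instead of advancing time by one unit, the Tick process jumps by the minimum remaining time over all active timing constraints, collapsing runs of uneventful ticks into a single transition. The timer names below are hypothetical.

```python
def next_leap(timers):
    """Largest safe leap: the minimum remaining time over all active timers,
    so no timing constraint can expire strictly inside the jump."""
    return min(timers.values())

def tick(now, timers, leap):
    """Advance the clock by `leap` units in a single transition."""
    now += leap
    timers = {name: left - leap for name, left in timers.items()}
    expired = [name for name, left in timers.items() if left == 0]
    return now, timers, expired

# Two hypothetical deadline timers: unit ticking would take 7 transitions to
# reach the first deadline; leaping reaches it in one.
now, timers = 0, {"taskA_deadline": 7, "taskB_deadline": 12}
leap = next_leap(timers)               # 7
now, timers, expired = tick(now, timers, leap)
print(now, expired)                    # 7 ['taskA_deadline']
```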

  19. Simultaneous bilateral stereotactic procedure for deep brain stimulation implants: a significant step for reducing operation time.

    Science.gov (United States)

    Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras

    2016-07-01

    OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional

  20. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. The procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff, and uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that the intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and the components of the daily hydrograph, are simulated consistently with measured values.
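
    A hedged sketch of the sequential calibration loop follows: each step optimizes only its own parameter subset against its own objective and is then frozen before the next step. SciPy's differential evolution stands in for the Shuffled Complex Evolution algorithm (which SciPy does not ship), and the toy model is purely illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def stepwise_calibrate(model, obs, steps, bounds):
    """Sequential multiple-objective calibration: each step tunes only its own
    parameter subset against its own observed variable, then is frozen."""
    params = {}
    for name in steps:                        # e.g. ["sr", "pet", "wb", "runoff"]
        def objective(x, name=name):
            sim = model({**params, name: x})[name]
            return float(np.sqrt(np.mean((sim - obs[name]) ** 2)))   # RMSE
        result = differential_evolution(objective, bounds[name], seed=0, maxiter=50)
        params[name] = result.x               # freeze before the next step
    return params

# Toy stand-in model: the "pet" output depends on the already-calibrated "sr"
truth = {"sr": np.array([2.0]), "pet": np.array([0.5])}
def model(p):
    sr = p.get("sr", [0.0])[0] * np.ones(10)
    return {"sr": sr, "pet": sr * p.get("pet", [0.0])[0]}

obs = model(truth)
print(stepwise_calibrate(model, obs, ["sr", "pet"],
                         {"sr": [(0.0, 5.0)], "pet": [(0.0, 2.0)]}))
```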

  1. Models for microtubule cargo transport coupling the Langevin equation to stochastic stepping motor dynamics: Caring about fluctuations.

    Science.gov (United States)

    Bouzat, Sebastián

    2016-01-01

    One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of the thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run time match previously specified functions of the external load, which are set on the basis of experimental results. We show that, due to the influence of the thermal fluctuations, this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics which considers memory on the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force of the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of the viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to the multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
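
    A minimal one-dimensional sketch of this model class follows: an overdamped Langevin (Euler–Maruyama) cargo coupled by a linear spring to a motor whose stepping rate decreases with load. The parameter values are rough kinesin-scale numbers chosen for illustration, and the memory-on-the-force refinement proposed in the paper is not included.

```python
import numpy as np

# Rough single-kinesin-scale parameters (illustrative assumptions only)
kBT     = 4.1e-21    # thermal energy, J
gamma   = 6.0e-8     # cargo drag coefficient, N s/m
k       = 3.0e-4     # motor-cargo linker stiffness, N/m
step    = 8.0e-9     # motor step size, m
r0      = 100.0      # unloaded stepping rate, 1/s
f_stall = 6.0e-12    # stall force, N
dt      = 1.0e-6     # time step, s

rng = np.random.default_rng(5)
x = y = 0.0                                   # cargo and motor positions, m
for _ in range(100_000):
    f = k * (y - x)                           # linker force on the cargo
    # Euler-Maruyama update of the overdamped Langevin cargo dynamics
    x += f / gamma * dt + np.sqrt(2.0 * kBT / gamma * dt) * rng.normal()
    # Stepping probability decreases linearly with the load opposing the motor
    load = max(0.0, k * (y - x))
    rate = r0 * max(0.0, 1.0 - load / f_stall)
    if rng.random() < rate * dt:
        y += step
print(f"cargo at {x * 1e9:.1f} nm, motor at {y * 1e9:.1f} nm")
```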

  2. The impact of weight classification on safety: timing steps to adapt to external constraints

    Science.gov (United States)

    Gill, S.V.

    2015-01-01

    Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults’ ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons across the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658

  3. Off-line real-time FTIR analysis of a process step in imipenem production

    Science.gov (United States)

    Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.

    1992-08-01

    We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real time. The reaction is moisture-sensitive and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.

  4. Comparison of Different Turbulence Models for Numerical Simulation of Pressure Distribution in V-Shaped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Zhaoliang Bai

    2017-01-01

    Full Text Available The V-shaped stepped spillway is a new type of stepped spillway, and its pressure distribution is quite different from that of the traditional stepped spillway. In this paper, five turbulence models were used to simulate the pressure distribution in the skimming flow regime. Compared with the values measured in the physical model, the realizable k-ε model had the best precision in simulating the pressure distribution. The flow patterns of the V-shaped and traditional stepped spillways were then computed with the realizable k-ε turbulence model to illustrate the unique pressure distribution.

  5. First Steps Towards AN Integrated Citygml-Based 3d Model of Vienna

    Science.gov (United States)

    Agugiaro, G.

    2016-06-01

    This paper presents and discusses the results regarding the initial steps (selection, analysis, preparation and eventual integration of a number of datasets) for the creation of an integrated, semantic, three-dimensional, CityGML-based virtual model of the city of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. It is being adopted by more and more cities all over the world. The work described in this paper is embedded within the European Marie-Curie ITN project "Ci-nergy, Smart cities with sustainable energy systems", which aims, among other things, at developing urban decision-making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is vital to set up a common, unique and spatio-semantically coherent urban model to be used as the information hub for all applications being developed. This paper reports on the experiences so far: it describes the test area and the available data sources, and it shows and exemplifies the data integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.

  6. Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system

    Directory of Open Access Journals (Sweden)

    Yoon Lee

    2012-08-01

    Full Text Available Objectives: To investigate the effect of the degree of dentin moisture and the air-drying time on the dentin bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. Each tooth was then sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA and a regression analysis was done (p = 0.05). Results: All three factors (material, dentin wetness and air-drying time) showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and a 10 sec air-drying time gave higher bond strengths. Conclusions: Within the limitations of this experiment, the air-drying time after the application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and the dentin moisture before applying the adhesive agent.

  7. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  8. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  9. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    Science.gov (United States)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feedback to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
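
    In spirit, the pass/fail decision reduces to comparing an ensemble root-mean-square difference against the model's known time-step sensitivity. A schematic sketch follows, with synthetic ensembles and an arbitrary threshold standing in for the CAM-derived values.

```python
import numpy as np

def tsc_fail(test_runs, ref_runs, dt_sensitivity):
    """Flag a code/environment change as significant when the test-vs-reference
    ensemble-mean RMSD exceeds the model's known time-step sensitivity."""
    rmsd = np.sqrt(np.mean((test_runs - ref_runs) ** 2, axis=1))  # per member
    return bool(rmsd.mean() > dt_sensitivity)

# Hypothetical ensembles: rows = members, columns = grid points of a state field
rng = np.random.default_rng(6)
ref = rng.normal(0.0, 1.0, (12, 1000))
test = ref + rng.normal(0.0, 1e-13, (12, 1000))  # rounding-level perturbation
print(tsc_fail(test, ref, dt_sensitivity=1e-6))  # False -> change passes the test
```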

  10. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    International Nuclear Information System (INIS)

    Finn, John M.

    2015-01-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
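    As a concrete reference point for the implicit midpoint (IM) scheme discussed above, here is a minimal Python sketch for advancing a point along a divergence-free field B; the fixed-point iteration, tolerance, and Euler starting guess are our choices, not details from the paper:

        import numpy as np

        def implicit_midpoint_step(B, x, h, tol=1e-12, max_iter=50):
            # Solve x1 = x + h * B((x + x1) / 2) for x1 by fixed-point
            # iteration, starting from an explicit Euler guess.
            x1 = x + h * B(x)
            for _ in range(max_iter):
                x_new = x + h * B(0.5 * (x + x1))
                if np.linalg.norm(x_new - x1) < tol:
                    return x_new
                x1 = x_new
            return x1

    The step is its own adjoint, which is the property the abstract links to second-order accuracy.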

  11. First steps towards real-time radiography at the NECTAR facility

    Science.gov (United States)

    Bücherl, T.; Wagner, F. M.; Lierse von Gostomski, Ch.

    2009-06-01

    The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8E+07 cm⁻² s⁻¹ (depending on filters and collimation) with a mean energy of about 1.9 MeV at the sample position at the NECTAR facility prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.

  12. First steps towards real-time radiography at the NECTAR facility

    International Nuclear Information System (INIS)

    Buecherl, T.; Wagner, F.M.; Lierse von Gostomski, Ch.

    2009-01-01

    The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8E+07 cm⁻² s⁻¹ (depending on filters and collimation) with a mean energy of about 1.9 MeV at the sample position at the NECTAR facility prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.

  13. First steps towards real-time radiography at the NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Buecherl, T. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)], E-mail: thomas.buecherl@radiochemie.de; Wagner, F.M. [Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II), Technische Universitaet Muenchen (Germany); Lierse von Gostomski, Ch. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)

    2009-06-21

    The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8E+07 cm⁻² s⁻¹ (depending on filters and collimation) with a mean energy of about 1.9 MeV at the sample position at the NECTAR facility prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.

  14. Thermal modeling of step-out targets at the Soda Lake geothermal field, Churchill County, Nevada

    Science.gov (United States)

    Dingwall, Ryan Kenneth

    Temperature data at the Soda Lake geothermal field in the southeastern Carson Sink, Nevada, highlight an intense thermal anomaly. The geothermal field produces roughly 11 MWe from two power-producing facilities which are rated to 23 MWe. The low output is attributed to the inability to locate and produce sufficient volumes of fluid at adequate temperature. Additionally, the current producing area has experienced declining production temperatures over its 40-year history. Two step-out targets adjacent to the main field have been identified that have the potential to increase production and extend the life of the field. Though shallow temperatures in the two subsidiary areas are significantly lower than those found within the main anomaly, measurements in deeper wells (>1,000 m) show that temperatures viable for utilization are present. High-pass filtering of the available complete Bouguer gravity data indicates that geothermal flow is present within the shallow sediments of the two subsidiary areas. Significant faulting is observed in the seismic data in both of the subsidiary areas; these structures are highlighted in the seismic similarity attribute calculated as part of this study. One possible conceptual model for the geothermal system(s) at the step-out targets is upflow along these faults from depth. To test this hypothesis, three-dimensional computer models were constructed to observe the temperatures that would result from geothermal flow along the observed fault planes. Results indicate that the observed faults are viable hosts for the geothermal system(s) in the step-out areas. Consequently, these faults are proposed as targets for future exploration focus and step-out drilling.

  15. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    Science.gov (United States)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

    It is known that the accuracy of medium- and long-term prediction of changes in length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decreases gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction; therefore it is used to forecast the LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is then used to compare the effectiveness of the LSAR and traditional AR methods. The predicted series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.
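    A rough sketch of the idea, assuming that "leap step" means regressing each value on earlier samples spaced k intervals apart rather than on immediately preceding ones (the order p, leap k, and plain least-squares fit here are illustrative, not the authors' configuration):

        import numpy as np

        def lsar_fit(y, p=4, k=7):
            # Fit y_t = c1*y_{t-k} + c2*y_{t-2k} + ... + cp*y_{t-pk} by least squares.
            y = np.asarray(y, dtype=float)
            rows = np.arange(p * k, len(y))
            X = np.array([[y[t - j * k] for j in range(1, p + 1)] for t in rows])
            coef, *_ = np.linalg.lstsq(X, y[rows], rcond=None)
            return coef

        def lsar_forecast(y, coef, k=7, horizon=30):
            hist = list(y)
            for _ in range(horizon):
                hist.append(sum(c * hist[-j * k] for j, c in enumerate(coef, 1)))
            return hist[len(y):]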

  16. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...

  17. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options.

  18. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. Here we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time step selection algorithms and their influence on the accuracy and efficiency of the various solution options

  19. Effect of increased exposure times on amount of residual monomer released from single-step self-etch adhesives.

    Science.gov (United States)

    Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet

    2015-10-16

    The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to bovine dentin surfaces according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of residual monomers released from the adhesives was observed at the 10-minute time point. There were statistically significant differences in released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times in their effect on residual monomer release from the adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.

  20. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  1. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    Science.gov (United States)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the models' predictions of the mean values of chemical species?
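    For orientation, the LMSE/IEM closure named above relaxes each notional particle's composition toward the ensemble mean on a mixing timescale. A minimal sketch (the factor-of-two convention and the forward-Euler update are common choices, assumed here):

        import numpy as np

        def iem_step(phi, tau_m, dt):
            # phi: (n_particles, n_species) array of particle compositions.
            # d(phi)/dt = -(phi - <phi>) / (2 * tau_m); forward-Euler update.
            return phi - dt * (phi - phi.mean(axis=0)) / (2.0 * tau_m)

    The study's questions then amount to asking whether a single tau_m, shared by all species and initial conditions, lets such an update reproduce the DNS mean concentrations.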

  2. In the time of significant generational diversity - surgical leadership must step up!

    Science.gov (United States)

    Money, Samuel R; O'Donnell, Mark E; Gray, Richard J

    2014-02-01

    The diverse attitudes and motivations of surgeons and surgical trainees within different age groups present an important challenge for surgical leaders and educators. These challenges to surgical leadership are not unique, and other industries have likewise needed to grapple with how best to manage these various age groups. The authors will herein explore management and leadership for surgeons in a time of age diversity, define generational variations within "Baby-Boomer", "Generation X" and "Generation Y" populations, and identify work ethos concepts amongst these three groups. The surgical community must understand and embrace these concepts in order to continue to attract a stellar pool of applicants from medical school. By not accepting the changing attitudes and motivations of young trainees and medical students, we may disenfranchise a high percentage of potential future surgeons. Surgical training programs will fill, but will they contain the highest quality trainees? Copyright © 2013 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  3. A stepped-care model of post-disaster child and adolescent mental health service provision

    Directory of Open Access Journals (Sweden)

    Brett M. McDermott

    2014-07-01

    Full Text Available Background: From a global perspective, natural disasters are common events. Published research highlights that a significant minority of exposed children and adolescents develop disaster-related mental health syndromes and associated functional impairment. Consistent with the considerable unmet need of children and adolescents with regard to psychopathology, there is strong evidence that many children and adolescents with post-disaster mental health presentations are not receiving adequate interventions. Objective: To critique existing child and adolescent mental health services (CAMHS) models of care and the capacity of such models to deal with any post-disaster surge in clinical demand. Further, to detail an innovative service response; a child and adolescent stepped-care service provision model. Method: A narrative review of traditional CAMHS is presented. Important elements of a disaster response – individual versus community recovery, public health approaches, capacity for promotion and prevention, and service reach – are discussed and compared with the CAMHS approach. Results: Difficulties with traditional models of care are highlighted across all levels of intervention; from the ability to provide preventative initiatives to the capacity to provide intense specialised posttraumatic stress disorder interventions. In response, our over-arching stepped-care model is advocated. The general response is discussed and details of the three tiers of the model are provided: Tier 1 communication strategy, Tier 2 parent effectiveness and teacher training, and Tier 3 screening linked to trauma-focused cognitive behavioural therapy. Conclusion: In this paper, we argue that traditional CAMHS are not an appropriate model of care to meet the clinical needs of this group in the post-disaster setting. We conclude with suggestions on how improved post-disaster child and adolescent mental health outcomes can be achieved by applying an innovative service approach.

  4. A stepped-care model of post-disaster child and adolescent mental health service provision.

    Science.gov (United States)

    McDermott, Brett M; Cobham, Vanessa E

    2014-01-01

    From a global perspective, natural disasters are common events. Published research highlights that a significant minority of exposed children and adolescents develop disaster-related mental health syndromes and associated functional impairment. Consistent with the considerable unmet need of children and adolescents with regard to psychopathology, there is strong evidence that many children and adolescents with post-disaster mental health presentations are not receiving adequate interventions. To critique existing child and adolescent mental health services (CAMHS) models of care and the capacity of such models to deal with any post-disaster surge in clinical demand. Further, to detail an innovative service response; a child and adolescent stepped-care service provision model. A narrative review of traditional CAMHS is presented. Important elements of a disaster response - individual versus community recovery, public health approaches, capacity for promotion and prevention, and service reach - are discussed and compared with the CAMHS approach. Difficulties with traditional models of care are highlighted across all levels of intervention; from the ability to provide preventative initiatives to the capacity to provide intense specialised posttraumatic stress disorder interventions. In response, our over-arching stepped-care model is advocated. The general response is discussed and details of the three tiers of the model are provided: Tier 1 communication strategy, Tier 2 parent effectiveness and teacher training, and Tier 3 screening linked to trauma-focused cognitive behavioural therapy. In this paper, we argue that traditional CAMHS are not an appropriate model of care to meet the clinical needs of this group in the post-disaster setting. We conclude with suggestions on how improved post-disaster child and adolescent mental health outcomes can be achieved by applying an innovative service approach.

  5. Unexpected perturbations training improves balance control and voluntary stepping times in older adults - a double blind randomized control trial.

    Science.gov (United States)

    Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak

    2016-03-04

    Falls are common among the elderly, and most occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking is able to improve risk factors for falls. In a double-blind randomized controlled trial, 53 community-dwelling older adults (age 80.1±5.6 years) were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected balance-perturbation exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters, and Stabilogram-Diffusion Analysis. The secondary outcome measures were the Falls Efficacy Scale (FES), self-reported late-life function (LLFDI), and the Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in the intervention program that included unexpected loss of balance during walking led to faster voluntary step execution times under single-task (p = 0.002; effect size [ES] = 0.75) and dual-task (p = 0.003; [ES] = 0.89) conditions; intervention group subjects showed improvement in short-term effective diffusion coefficients in the mediolateral direction of the Stabilogram-Diffusion Analysis under eyes-closed conditions (p = 0.012, [ES] = 0.92). Compared to control, there were no significant changes in FES, LLFDI, and POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This, however, did not transfer to a change in self-reported function or FES. ClinicalTrials.gov NCT01439451.

  6. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

    The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device, including the mass transfer step of ionic reactants. Results of the calculation have demonstrated not only a qualitative explanation of the characteristics observed in experiment, but also quantitative similarity to the measured current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  7. A simple one-step chemistry model for partially premixed hydrocarbon combustion

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2006-10-15

    This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature Tₐ, and the preexponential factor B. It is seen that changes in q with equivalence ratio φ need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(φ) for φ > 1 for all hydrocarbons. The model also employs a variable activation temperature Tₐ(φ) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately over the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
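    The resulting rate expression is simple enough to state directly; a sketch with unity reaction order in the fuel (the symbol names are ours, and the fitted q(φ) and Tₐ(φ) variations are not reproduced here):

        import numpy as np

        def one_step_rate(rho, Y_F, T, B, T_a):
            # Fuel consumption rate for one-step irreversible Arrhenius
            # kinetics, first order in the fuel mass fraction Y_F:
            #   omega = B * rho * Y_F * exp(-T_a / T)
            return B * rho * Y_F * np.exp(-T_a / T)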

  8. Detection of Listeria monocytogenes in ready-to-eat food by Step One real-time polymerase chain reaction.

    Science.gov (United States)

    Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal

    2012-01-01

    The aim of this study was to monitor contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR. Among 30 samples of ready-to-eat milk and meat products analyzed without incubation, we detected strains of Listeria monocytogenes in five samples (swabs). The internal positive control (IPC) was positive in all samples. Our results indicate that the real-time PCR assay developed in this study can sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.

  9. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real-time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP), and a model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good agreement in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP, and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
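    The input-output structure of such a model is easy to sketch; here a small scikit-learn network stands in for the ANN option (the lag count, direction encoding, and network size are illustrative choices, not the paper's):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def lagged_features(wind_speed, wind_dir_rad, n_lags=6):
            # Preceding wind speeds plus sin/cos of the latest wind direction.
            cols = [np.roll(wind_speed, j) for j in range(1, n_lags + 1)]
            cols += [np.roll(np.sin(wind_dir_rad), 1),
                     np.roll(np.cos(wind_dir_rad), 1)]
            # Drop the first n_lags rows, which are contaminated by wrap-around.
            return np.column_stack(cols)[n_lags:]

        # model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
        # model.fit(lagged_features(u, d, 6), hs[6:])  # hs: significant wave height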

  10. Bootstrapping a time series model

    International Nuclear Information System (INIS)

    Son, M.S.

    1984-01-01

    The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this dissertation was to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates and multi-period forecasts in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. A secondary objective was to present the bootstrap in the context of two econometric equations describing the unemployment rate and individual income tax in the state of Oklahoma. As it turns out, the conventional asymptotic formulae (for both the least squares and maximum likelihood estimates) appear to overestimate the true standard errors. But there are two caveats: 1) the first two observations y₁ and y₂ have been fixed, and 2) the residuals have not been inflated. After these two factors are accounted for in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to perform quite well.
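    A compact residual-bootstrap sketch for an AR(2) fitted by least squares, written to reflect the two caveats above (the first two observations are held fixed; no residual inflation is applied either):

        import numpy as np

        def ar2_fit(y):
            X = np.column_stack([y[1:-1], y[:-2]])        # y_{t-1}, y_{t-2}
            coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
            return coef, y[2:] - X @ coef

        def bootstrap_se(y, n_boot=1000, seed=0):
            rng = np.random.default_rng(seed)
            coef, resid = ar2_fit(y)
            resid = resid - resid.mean()
            draws = np.empty((n_boot, 2))
            for b in range(n_boot):
                e = rng.choice(resid, size=len(y))        # resample residuals
                yb = np.empty(len(y))
                yb[:2] = y[:2]                            # y1, y2 held fixed
                for t in range(2, len(y)):
                    yb[t] = coef[0] * yb[t - 1] + coef[1] * yb[t - 2] + e[t]
                draws[b] = ar2_fit(yb)[0]
            return draws.std(axis=0)                      # bootstrap standard errors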

  11. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  12. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  13. Ozonisation of model compounds as a pretreatment step for the biological wastewater treatment

    International Nuclear Information System (INIS)

    Degen, U.

    1979-11-01

    Biological degradability and toxicity of organic substances are two basic criteria determining their behaviour in the natural environment and during the biological treatment of waste waters. In this work, oxidation products of model compounds (p-toluenesulfonic acid, benzenesulfonic acid and aniline) generated by ozonation were tested in a two-step laboratory plant with activated sludge. The organic oxidation products and the initial compounds were the sole source of carbon for the microbes of the adapted activated sludge. The progress of elimination of the compounds was studied by measuring DOC, COD, UV spectra of the initial compounds, and sulfate. Initial concentrations of the model compounds were 2-4 mmol/L with 25-75% […] of sulfonic acids. The following oxidation products of p-toluenesulfonic acid were identified and quantitatively measured: methylglyoxal, pyruvic acid, oxalic acid, acetic acid, formic acid and sulfate. With all the various solutions with different concentrations of initial compounds and oxidation products, the biological activity in the two-step laboratory plant could be maintained. p-Toluenesulfonic acid and the oxidation products are biologically degraded. The degradation of p-toluenesulfonic acid is measured by following the increase of the sulfate concentration after biological treatment. This shows that the elimination of p-toluenesulfonic acid is not an adsorption but a mineralization step. At high p-toluenesulfonic acid concentration and low concentration of oxidation products, p-toluenesulfonic acid is eliminated with a high efficiency (4.3 mol/(d·m³) = 0.34 kg p-toluenesulfonic acid/(d·m³)). However, at high concentration of oxidation products, p-toluenesulfonic acid is less degraded. The oxidation products are always degraded with an elimination efficiency of 70%. A high load of biologically degradable oxidation products diminished the elimination efficiency of p-toluenesulfonic acid. (orig.) [de

  14. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has decreased over the past 17 years: from 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  15. Model of cap-dependent translation initiation in sea urchin: a step towards the eukaryotic translation regulation network.

    Science.gov (United States)

    Bellé, Robert; Prigent, Sylvain; Siegel, Anne; Cormier, Patrick

    2010-03-01

    The large and rapid increase in the rate of protein synthesis following fertilization of the sea urchin egg has long been a paradigm of translational control, an important component of the regulation of gene expression in cells. This translational up-regulation is linked to physiological changes that occur upon fertilization and is necessary for entry into the first cell division cycle. Accumulated knowledge on cap-dependent initiation of translation makes it suitable and timely to start integrating the data into a system view of biological functions. Using a programming environment for systems biology coupled with model validation (named Biocham), we have built an integrative model for cap-dependent initiation of translation. The model is described by abstract rules. It contains 51 reactions involving 74 molecular complexes. The model proved to be coherent with existing knowledge by using queries based on computational tree logic (CTL) as well as Boolean simulations. The model could simulate the change in translation occurring at fertilization in the sea urchin model. It could also be coupled with an existing model designed for cell-cycle control. Therefore, the cap-dependent translation initiation model can be considered a first step towards the eukaryotic translation regulation network.

  16. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors into metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-year period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
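    The flavor of a variable-time-step QSTS solver can be conveyed with a simple adaptive loop: step coarsely while the feeder state changes little, and fall back to the base resolution when a step produces a large change. The doubling rule, tolerance, and scalar feeder state below are our assumptions, not the paper's algorithm:

        def run_qsts(solve, n_seconds, dt_min=1, dt_max=3600, tol=0.005):
            # solve(t) -> a key feeder quantity (e.g., a bus voltage in p.u.)
            # at second t of the yearlong load/irradiance time series.
            t, dt = 0, dt_min
            results = [(0, solve(0))]
            while t + dt < n_seconds:
                v = solve(t + dt)
                if abs(v - results[-1][1]) > tol and dt > dt_min:
                    dt = dt_min              # large change: refine and re-solve
                    continue
                t += dt
                results.append((t, v))
                dt = min(2 * dt, dt_max)     # quiet period: stretch the step
            return results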

  17. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Science.gov (United States)

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance, and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). Conclusions The CSRT test showed good discriminant and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  18. Modeling of Step-up Grid-Connected Photovoltaic Systems for Control Purposes

    Directory of Open Access Journals (Sweden)

    Daniel Gonzalez

    2012-06-01

    Full Text Available This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage-source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state-space representation to simplify their use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition of all the models, which is an important aspect in selecting the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the paper's results are validated on an experimental test bench.
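    To make the state-space flavor concrete, here is a minimal averaged model of a boost stage feeding a DC bus that the closed-loop inverter holds constant (the voltage-source representation above); the element values and the externally supplied PV current are placeholders, not values from the paper:

        import numpy as np

        def boost_dynamics(x, d, v_bus, i_pv, L=1e-3, C=470e-6):
            # x = [iL, v_pv]: inductor current and PV-side capacitor voltage.
            # d: duty cycle; v_bus: DC bus voltage held by the inverter loop.
            iL, v_pv = x
            diL = (v_pv - (1.0 - d) * v_bus) / L
            dv_pv = (i_pv - iL) / C
            return np.array([diL, dv_pv])

    Writing the model this way exposes why control design needs care: the input d multiplies a state, so the linearized duty-to-voltage transfer function can acquire a right-half-plane zero at some operating points, consistent with the non-minimum phase condition the paper demonstrates.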

  19. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  20. A stochastic step flow model with growth in 1+1 dimensions

    International Nuclear Information System (INIS)

    Margetis, Dionisios

    2010-01-01

    Mathematical implications of adding Gaussian white noise to the Burton-Cabrera-Frank model for N terraces ('gaps') on a crystal surface are studied under external material deposition for large N. The terraces separate straight, non-interacting line defects (steps) with uniform spacing initially (t = 0). As the growth tends to vanish, the gaps become uncorrelated. First, simple closed-form expressions for the gap variance are obtained directly for small fluctuations. The leading-order, linear stochastic differential equations are prototypical of discrete asymmetric processes. Second, the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy for joint gap densities is formulated. Third, a self-consistent 'mean field' is defined via the BBGKY hierarchy. This field is then determined approximately through a terrace decorrelation hypothesis. Fourth, comparisons are made of directly obtained and mean-field results. Limitations and issues in the modeling of noise are outlined.

  1. Effect of different air-drying time on the microleakage of single-step self-etch adhesives

    Directory of Open Access Journals (Sweden)

    Horieh Moosavi

    2013-05-01

    Full Text Available Objectives This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups according to the air-drying time: no application of an air stream, air drying following the manufacturer's instructions, and air drying for 10 sec longer than the manufacturer's instructions. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's recommended time (p < 0.001). The microleakage of BF reached its lowest values after increasing the drying time to 10 sec more than the manufacturer's instructions (p < 0.001). Microleakage of OBAO and CSB was significantly lower compared to BF at all three drying times (p < 0.001). Conclusions Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives reduced microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.

  2. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
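    In the spirit of that pendulum-spring description, here is a toy calculation of the leg contact force implied by a given step execution time. All parameters, the forward-Euler fall phase, and the energy-balance contact estimate are illustrative, not the authors' model; step length enters the full model through the contact geometry and is omitted here:

        import numpy as np

        def leg_contact_force(theta0, omega0, t_step, m=70.0, L=1.0, k=2.0e4, g=9.81):
            # Fall phase: inverted pendulum theta'' = (g / L) * sin(theta),
            # integrated over the step execution time t_step.
            dt, th, om = 1e-4, theta0, omega0
            for _ in range(int(t_step / dt)):
                om += (g / L) * np.sin(th) * dt
                th += om * dt
            # Contact phase: a linear leg spring of stiffness k absorbs the
            # body's kinetic energy; peak force from (1/2)k x^2 = (1/2)m v^2.
            v = om * L
            return abs(v) * np.sqrt(m * k)

    Even this caricature reproduces the qualitative coupling in the abstract: a longer step execution time lets the body fall further, raising the contact force that a leg of given strength must supply.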

  3. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  4. Small Steps: Preliminary effectiveness and feasibility of an incremental goal-setting intervention to reduce sitting time in older adults.

    Science.gov (United States)

    Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T

    2016-03-01

    This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Modeling of X-ray Images and Energy Spectra Produced by Stepping Lightning Leaders

    Science.gov (United States)

    Xu, Wei; Marshall, Robert A.; Celestin, Sebastien; Pasko, Victor P.

    2017-11-01

    Recent ground-based measurements at the International Center for Lightning Research and Testing (ICLRT) have greatly improved our knowledge of the energetics, fluence, and evolution of X-ray emissions during natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, using Monte Carlo simulations and the response matrix of unshielded detectors in the Thunderstorm Energetic Radiation Array (TERA), we calculate the energy spectra of X-rays as would be detected by TERA and directly compare with the observational data during event MSE 10-01. The good agreement obtained between TERA measurements and theoretical calculations supports the mechanism of X-ray production by thermal runaway electrons during the negative corona flash stage of stepping lightning leaders. Modeling results also suggest that measurements of X-ray bursts can be used to estimate the approximate range of potential drop of lightning leaders. Moreover, the X-ray images produced during the leader stepping process in natural negative CG discharges, including both the evolution and morphological features, are theoretically quantified. We show that the compact emission pattern as recently observed in X-ray images is likely produced by X-rays originating from the source region, and the diffuse emission pattern can be explained by the Compton scattering effects.

  6. Finite element time domain modeling of controlled-Source electromagnetic data with a hybrid boundary condition

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin

    2017-01-01

    The method uses a time-stepping scheme which is unconditionally stable. We solve the diffusion equation for the electric field with a total-field formulation. The finite element system of equations is solved using a direct method, so that once the matrix is factorized, the solutions for the electric field at different times can be obtained with trivial computational cost using an effective time-stepping method. We try to keep the same time step size for a fixed number of steps using an adaptive time step doubling (ATSD) method. The finite element modeling domain is also truncated using a semi-adaptive method. We propose a new boundary condition based on approximating the total field on the modeling boundary using the primary field corresponding to a layered background model. We validate our algorithm using several synthetic model studies.
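    The computational trick highlighted above (factorize the system matrix once, then reuse it while the step size stays fixed) looks roughly like this; the matrix names and the backward-Euler form M/dt + K are our assumptions, not the paper's exact discretization:

        import scipy.sparse.linalg as spla

        def backward_euler_run(M, K, e0, dt, n_steps):
            # Semi-discrete system M de/dt + K e = 0 (source-free for brevity):
            # (M/dt + K) e_{n+1} = M e_n / dt at each step.
            lu = spla.splu((M / dt + K).tocsc())   # direct factorization, once
            e, out = e0, [e0]
            for _ in range(n_steps):               # each step: cheap triangular solves
                e = lu.solve(M @ e / dt)
                out.append(e)
            return out

    Under the ATSD strategy, doubling dt forces one new factorization, which is why the step size is kept constant for a fixed number of steps.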

  7. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (step 2)

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Endo, Yoshinobu

    2005-02-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations are being conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 2, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) The understanding of groundwater flow is enhanced, and the hydrogeological model has been revised; 2) The importance of faults as major groundwater flow pathways has been demonstrated; 3) The importance of an iterative approach as investigations progress has been demonstrated; 4) Geological and hydraulic characteristics of faults with orientations of NNW, NW and NE were shown to be especially significant; 5) The hydraulic properties of the Lower Sparsely Fractured Domain (LSFD) significantly influence the groundwater flow. The main items specified for further investigations are summarized as follows: 1) Geological and hydraulic characteristics of NNW, NW and NE trending faults; 2) Hydraulic properties of the LSFD; 3) More accurate upper and lateral boundary conditions for the site-scale model. (author)

  8. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
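    In our notation (not the paper's), a representation of the kind described reads, for an outcome path x(t) on [0, T],

        U(x) = \int_0^T a(t)\, v\bigl(x(t)\bigr)\, dt,

    where a(t) is the discounting function and v is the scale defined on outcomes at instants of time; the familiar discrete-time sequence form arises when x(t) is piecewise constant.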

  9. Long-Time Plasma Membrane Imaging Based on a Two-Step Synergistic Cell Surface Modification Strategy.

    Science.gov (United States)

    Jia, Hao-Ran; Wang, Hong-Yin; Yu, Zhi-Wu; Chen, Zhan; Wu, Fu-Gen

    2016-03-16

    Long-time stable plasma membrane imaging is difficult due to the fast cellular internalization of fluorescent dyes and the quick detachment of the dyes from the membrane. In this study, we developed a two-step synergistic cell surface modification and labeling strategy to realize long-time plasma membrane imaging. Initially, a multisite plasma membrane anchoring reagent, glycol chitosan-10% PEG2000 cholesterol-10% biotin (abbreviated as "GC-Chol-Biotin"), was incubated with cells to modify the plasma membranes with biotin groups with the assistance of the membrane anchoring ability of cholesterol moieties. Fluorescein isothiocyanate (FITC)-conjugated avidin was then introduced to achieve the fluorescence-labeled plasma membranes based on the supramolecular recognition between biotin and avidin. This strategy achieved stable plasma membrane imaging for up to 8 h without substantial internalization of the dyes, and avoided the quick fluorescence loss caused by the detachment of dyes from plasma membranes. We have also demonstrated that the imaging performance of our staining strategy far surpassed that of current commercial plasma membrane imaging reagents such as DiD and CellMask. Furthermore, the photodynamic damage of plasma membranes caused by a photosensitizer, Chlorin e6 (Ce6), was tracked in real time for 5 h during continuous laser irradiation. Plasma membrane behaviors including cell shrinkage, membrane blebbing, and plasma membrane vesiculation could be dynamically recorded. Therefore, the imaging strategy developed in this work may provide a novel platform to investigate plasma membrane behaviors over a relatively long time period.

  10. A Three Step B2B Sales Model Based on Satisfaction Judgments

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe

    2015-01-01

    This paper aims to provide a coherent, detailed and integrative understanding of the mental processes (i.e. dimensions) that industrial buyers apply when forming satisfaction judgments in adjacent-to-new-task buying situations. A qualitative inductive research strategy is utilized in this study... companies' perspective. The buying center members applied satisfaction dimensions when forming satisfaction judgments. Moreover, the focus and importance of the identified satisfaction dimensions fluctuated depending on the phase of the buying process. Based on the findings, a three-step sales model is proposed... The insights produced can be applied by selling companies to craft close collaborative customer relationships in a systematic and efficient way. The process of building customer relationships will be guided through actions that yield higher satisfaction judgments, leading to loyal customers and finally...

  12. Analytic observations for the d=1+1 bridge site (or single-step) deposition model

    International Nuclear Information System (INIS)

    Evans, J.W.; Kang, H.C.

    1991-01-01

    Some exact results for a reversible version of the d=1+1 bridge site (or single-step) deposition model are presented. Exact steady-state properties are determined directly for finite systems with various mean slopes. These show explicitly how the asymptotic growth velocity and fluctuations are quenched as the slope approaches its maximum allowed value. Next, exact hierarchical equations for the dynamics are presented. For the special case of ''equilibrium growth,'' these are analyzed exactly at the pair-correlation level directly for an infinite system. This provides further insight into asymptotic scaling behavior. Finally, the above hierarchy is compared with one generated from a discrete form of the Kardar-Parisi-Zhang equation. Some differences are described
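
    As a rough illustration of the model class (not the paper's exact analysis), a single-step interface can be simulated in a few lines: heights on a ring differ by +/-1 between neighbors, particles deposit at local minima and, in the reversible version, evaporate from local maxima. The rates below are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        # 1+1D single-step (bridge-site) deposition on a ring of L sites:
        # neighboring height differences are constrained to +/-1.
        L = 64
        h = np.zeros(L, dtype=int)
        h[1::2] = 1                  # flat initial interface respecting the constraint
        p_dep, p_evap = 1.0, 0.2     # assumed attempt probabilities (illustrative)

        for step in range(200_000):
            i = rng.integers(L)
            left, right = h[(i - 1) % L], h[(i + 1) % L]
            if left > h[i] < right and rng.random() < p_dep:
                h[i] += 2            # deposit at a local minimum
            elif left < h[i] > right and rng.random() < p_evap:
                h[i] -= 2            # evaporate from a local maximum (reversible case)

        print("mean height:", h.mean(), "interface width:", h.std())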

  13. Compact Two-step Laser Time-of-Flight Mass Spectrometer for in Situ Analyses of Aromatic Organics on Planetary Missions

    Science.gov (United States)

    Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa

    2012-01-01

    RATIONALE A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon, representing a class of compounds important to the fields of Earth and planetary science. RESULTS L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. Mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.
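
    For orientation, the flight time of a singly charged ion in a simple linear time-of-flight analyzer scales as t = L*sqrt(m/(2qU)), and mass resolution relates to the temporal peak width via m/dm ~ t/(2*dt). A back-of-the-envelope sketch with illustrative numbers (the real reflectron geometry differs):

        import numpy as np

        # Illustrative flight-time estimate for a singly charged pyrene ion
        # (m = 202 Da) accelerated through 5 kV over a ~20 cm path; the actual
        # reflectron path differs, so this is order-of-magnitude only.
        amu = 1.66053907e-27      # kg
        q = 1.602176634e-19       # C
        m = 202 * amu             # pyrene
        U = 5.0e3                 # extraction voltage, V
        L = 0.20                  # flight path, m

        t = L * np.sqrt(m / (2 * q * U))
        print(f"flight time ~ {t*1e6:.2f} us")

        dt = 4e-9                 # assumed detectable peak width (s), illustrative
        print(f"m/dm ~ {t / (2 * dt):.0f}")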

  14. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    Science.gov (United States)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2014-04-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  15. Integration of FULLSWOF2D and PeanoClaw: Adaptivity and Local Time-Stepping for Complex Overland Flows

    KAUST Repository

    Unterweger, K.

    2015-01-01

    © Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—here is our software of choice, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.
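
    The essence of local time-stepping is subcycling: a refined patch advances with smaller steps than the coarse grid so that both satisfy their own CFL limits. A toy 1D advection sketch (grids, CFL number and the upwind update are illustrative; the ghost-cell coupling that a real AMR framework handles is omitted):

        import numpy as np

        def upwind_step(u, c, dt, dx):
            # first-order upwind update for c > 0 (periodic for simplicity)
            return u - c * dt / dx * (u - np.roll(u, 1))

        c = 1.0
        x = np.arange(100) / 100.0
        coarse = np.exp(-((x - 0.3) ** 2) / 0.01)
        fine = np.repeat(coarse[40:60], 2)     # refined patch at half the spacing

        dx_c, dx_f = 0.01, 0.005
        dt = 0.4 * dx_c / c                    # stable step for the coarse grid

        for n in range(50):
            coarse = upwind_step(coarse, c, dt, dx_c)
            for _ in range(2):                 # two local time steps on the fine patch
                fine = upwind_step(fine, c, dt / 2, dx_f)
            # (ghost-cell exchange between patch and coarse grid omitted)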

  16. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
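
    As a concrete member of the family discussed here, the strong-stability-preserving third-order Runge-Kutta scheme of Shu and Osher can be written in three stages; a minimal sketch (the test problem is illustrative):

        import numpy as np

        # Third-order strong-stability-preserving Runge-Kutta step (Shu-Osher form),
        # one common member of the RK3 family.
        def ssp_rk3_step(f, t, y, dt):
            k1 = y + dt * f(t, y)
            k2 = 0.75 * y + 0.25 * (k1 + dt * f(t + dt, k1))
            return y / 3 + 2 / 3 * (k2 + dt * f(t + 0.5 * dt, k2))

        # Example: y' = -y, exact solution exp(-t)
        f = lambda t, y: -y
        t, y, dt = 0.0, 1.0, 0.1
        for _ in range(10):
            y = ssp_rk3_step(f, t, y, dt)
            t += dt
        print(y, np.exp(-1.0))   # third-order accurate: local error ~ dt^4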

  17. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  18. A Novel Molten Salt Reactor Concept to Implement the Multi-Step Time-Scheduled Transmutation Strategy

    International Nuclear Information System (INIS)

    Csom, Gyula; Feher, Sandor; Szieberth, Mate

    2002-01-01

    Nowadays the molten salt reactor (MSR) concept seems to revive as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems the fuel and the material to be transmuted circulate dissolved in some molten salt. The main advantage of this reactor type is the possibility of continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, which yields higher transmutation efficiencies than the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and time length. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are only neutronically and thermally coupled. This new concept makes possible the utilization of the spatial dependence of the spectrum as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations to find the optimal implementation of this new concept and to emphasize its other advantageous features are going on. (authors)

  19. The ATP hydrolysis and phosphate release steps control the time course of force development in rabbit skeletal muscle.

    Science.gov (United States)

    Sleep, John; Irving, Malcolm; Burton, Kevin

    2005-03-15

    The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two

  20. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    textabstractA recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  1. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the overall time consumption, and the memory allocation required by the imaging condition. Based on reduced-order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced-order model. Reverse time migration by reduced-order modeling lends itself to highly parallel computation and strongly reduces the memory requirements of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
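
    The core ingredient is a Krylov subspace iteration that builds an orthonormal basis onto which a large operator is projected. A minimal Arnoldi sketch (the operator A and starting vector are stand-ins, not a wave-equation discretization):

        import numpy as np

        # Arnoldi iteration: builds an orthonormal basis Q for the Krylov space
        # span{b, Ab, A^2 b, ...}, the kind of reduced basis onto which a
        # propagation operator can be projected.
        def arnoldi(A, b, m):
            n = len(b)
            Q = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = b / np.linalg.norm(b)
            for j in range(m):
                v = A @ Q[:, j]
                for i in range(j + 1):             # modified Gram-Schmidt
                    H[i, j] = Q[:, i] @ v
                    v -= H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(v)
                Q[:, j + 1] = v / H[j + 1, j]
            return Q, H

        A = np.diag(np.linspace(1, 2, 200))        # illustrative stand-in operator
        b = np.ones(200)
        Q, H = arnoldi(A, b, 20)
        A_reduced = Q[:, :20].T @ A @ Q[:, :20]    # 20x20 reduced-order operator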

  2. Calibration of an estuarine sediment transport model to sediment fluxes as an intermediate step for simulation of geomorphic evolution

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2009-01-01

    Modeling geomorphic evolution in estuaries is necessary to model the fate of legacy contaminants in the bed sediment and the effect of climate change, watershed alterations, sea level rise, construction projects, and restoration efforts. Coupled hydrodynamic and sediment transport models used for this purpose typically are calibrated to water level, currents, and/or suspended-sediment concentrations. However, small errors in these tidal-timescale models can accumulate to cause major errors in geomorphic evolution, which may not be obvious. Here we present an intermediate step towards simulating decadal-timescale geomorphic change: calibration to estimated sediment fluxes (mass/time) at two cross-sections within an estuary. Accurate representation of sediment fluxes gives confidence in representation of sediment supply to and from the estuary during those periods. Several years of sediment flux data are available for the landward and seaward boundaries of Suisun Bay, California, the landward-most embayment of San Francisco Bay. Sediment flux observations suggest that episodic freshwater flows export sediment from Suisun Bay, while gravitational circulation during the dry season imports sediment from seaward sources. The Regional Oceanic Modeling System (ROMS), a three-dimensional coupled hydrodynamic/sediment transport model, was adapted for Suisun Bay, for the purposes of hindcasting 19th and 20th century bathymetric change, and simulating geomorphic response to sea level rise and climatic variability in the 21st century. The sediment transport parameters were calibrated using the sediment flux data from 1997 (a relatively wet year) and 2004 (a relatively dry year). The remaining years of data (1998, 2002, 2003) were used for validation. The model represents the inter-annual and annual sediment flux variability, while net sediment import/export is accurately modeled for three of the five years. The use of sediment flux data for calibrating an estuarine geomorphic

  3. Models of human adamantinomatous craniopharyngioma tissue: Steps toward an effective adjuvant treatment.

    Science.gov (United States)

    Hölsken, Annett; Buslei, Rolf

    2017-05-01

    Even though ACP is a benign tumor, treatment is challenging because of the tumor's eloquent location. Today, with the exception of surgical intervention and irradiation, treatment options remain limited. However, ongoing molecular research in this field provides insights into the pathways involved in ACP pathogenesis and reveals a plethora of druggable targets. In the next step, appropriate models are essential to identify the most suitable and effective substances for clinical practice. Primary cell cultures in low passages provide a proper and rapid tool for initial drug potency testing. The patient-derived xenograft (PDX) model accommodates ACP complexity in that it preserves the tumor architecture and shows a histological appearance similar to human tumors, and therefore provides the most appropriate means for analyzing pharmacological efficacy. Nevertheless, further research is needed to understand in more detail the biological background of ACP pathogenesis, which will allow identification of the best targets in the hierarchy of signaling cascades. ACP models are also important for the continuous testing of new targeting drugs, to establish precision medicine. © 2017 International Society of Neuropathology.

  4. Alcoholics Anonymous and twelve-step recovery: a model based on social and cognitive neuroscience.

    Science.gov (United States)

    Galanter, Marc

    2014-01-01

    In the course of achieving abstinence from alcohol, longstanding members of Alcoholics Anonymous (AA) typically experience a change in their addiction-related attitudes and behaviors. These changes are reflective of physiologically grounded mechanisms which can be investigated within the disciplines of social and cognitive neuroscience. This article is designed to examine recent findings associated with these disciplines that may shed light on the mechanisms underlying this change. Literature review and hypothesis development. Pertinent aspects of the neural impact of drugs of abuse are summarized. After this, research regarding specific brain sites, elucidated primarily by imaging techniques, is reviewed relative to the following: Mirroring and mentalizing are described in relation to experimentally modeled studies on empathy and mutuality, which may parallel the experiences of social interaction and influence on AA members. Integration and retrieval of memories acquired in a setting like AA are described, and are related to studies on storytelling, models of self-schema development, and value formation. A model for ascription to a Higher Power is presented. The phenomena associated with AA reflect greater complexity than the empirical studies on which this article is based, and certainly require further elucidation. Despite this substantial limitation in currently available findings, there is heuristic value in considering the relationship between the brain-based and clinical phenomena described here. There are opportunities for the study of neuroscientific correlates of Twelve-Step-based recovery, and these can potentially enhance our understanding of related clinical phenomena. © American Academy of Addiction Psychiatry.

  5. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

    The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals model; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade-off model.
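
    For reference, three of the simplest surveyed families differ only in the functional form of the discount function d(t); a short sketch with illustrative parameter values:

        import numpy as np

        # Three of the surveyed discount functions, written so the rate
        # parameter is explicit; parameter values are illustrative.
        def exponential(t, k):                   # d(t) = exp(-k t)
            return np.exp(-k * t)

        def hyperbolic(t, k):                    # d(t) = 1 / (1 + k t)
            return 1.0 / (1.0 + k * t)

        def quasi_hyperbolic(t, beta, delta):    # beta-delta: d(0)=1, else beta*delta^t
            return np.where(t == 0, 1.0, beta * delta ** t)

        t = np.arange(0, 11)
        for name, d in [("exponential", exponential(t, 0.10)),
                        ("hyperbolic", hyperbolic(t, 0.10)),
                        ("quasi-hyperbolic", quasi_hyperbolic(t, 0.9, 0.95))]:
            print(f"{name:>16}: {np.round(d, 3)}")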

  6. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem, to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only, for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model ...
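
    For context, the standard stochastic EnKF analysis step that these joint and dual variants rearrange can be sketched in a few lines (dimensions, the observation operator H and noise levels are illustrative):

        import numpy as np

        # Minimal stochastic EnKF analysis step (perturbed observations).
        def enkf_update(X, y, H, R, rng):
            """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation."""
            n = X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)
            Yp = H @ Xp
            Pyy = Yp @ Yp.T / (n - 1) + R              # innovation covariance
            Pxy = Xp @ Yp.T / (n - 1)                  # state-obs cross covariance
            K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
            y_pert = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=n).T         # perturbed observations
            return X + K @ (y_pert - H @ X)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(10, 50))                  # 10 states, 50 members
        H = np.eye(3, 10)                              # observe the first 3 states
        R = 0.1 * np.eye(3)
        Xa = enkf_update(X, rng.normal(size=3), H, R, rng)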

  7. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...
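
    The recursive/direct distinction can be made concrete with a toy nonlinear autoregression (the map g below is an illustrative stand-in for an estimated parametric model):

        import numpy as np

        # Toy nonlinear AR(1): y_t = g(y_{t-1}) + noise
        def g(y):
            return 0.9 * y - 0.2 * y ** 3

        rng = np.random.default_rng(0)
        y = np.zeros(500)
        for t in range(1, 500):
            y[t] = g(y[t - 1]) + 0.1 * rng.normal()

        h = 3                           # forecast horizon
        # Recursive: iterate the one-step model h times from the last observation.
        f = y[-1]
        for _ in range(h):
            f = g(f)                    # fitted one-step map iterated (noise ignored)
        print("recursive 3-step forecast:", f)

        # Direct: regress y_t on y_{t-h} and predict in one step (a cubic
        # least-squares fit stands in for a separately estimated model).
        X = np.vander(y[:-h], 4)        # columns: y^3, y^2, y, 1
        beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
        print("direct 3-step forecast:", np.vander([y[-1]], 4) @ beta)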

  8. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
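
    Step 6 of the outline, calculating time series features as latent variables, might look like the following sliding-window sketch (window length, resolution and feature set are illustrative choices, not the study's):

        import numpy as np

        # Turn raw vital-sign samples into windowed time series features
        # (latent variables) that a prediction model can consume.
        def window_features(x, win, step):
            rows = []
            for start in range(0, len(x) - win + 1, step):
                w = x[start:start + win]
                slope = np.polyfit(np.arange(win), w, 1)[0]   # trend within window
                rows.append([w.mean(), w.std(), w.min(), w.max(), slope])
            return np.array(rows)

        rng = np.random.default_rng(2)
        heart_rate = 120 + np.cumsum(rng.normal(0, 0.5, size=720))  # 12 h at 1/min
        X = window_features(heart_rate, win=60, step=15)            # 60-min windows
        print(X.shape)   # (n_windows, n_features): input rows for a classifier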

  9. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, model runtime is another factor relevant to the model weights, and hence to model selection. We believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in inverse proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.

  10. Phototransduction early steps model based on Beer-Lambert optical law.

    Science.gov (United States)

    Salido, Ezequiel M; Servalli, Leonardo N; Gomez, Juan Carlos; Verrastro, Claudio

    2017-02-01

    The amount of available rhodopsin on the photoreceptor outer segment and its change over time is not considered in classic models of phototransduction. Thus, those models do not take into account the absorptance variation of the outer segment under different brightness conditions. The relationship between the light absorbed by a medium and its absorptance is well described by the Beer-Lambert law. This newly proposed model implements the absorptance variation phenomenon in a set of equations that admit photons per second as input and produce active rhodopsins per second as output. This study compares the classic model of phototransduction developed by Forti et al. (1989) to this new model by using different light stimuli to measure active rhodopsin and photocurrent. The results show a linear relationship between light stimulus and active rhodopsin in the Forti model and an exponential saturation in the new model. Further, photocurrent values have shown that the new model reproduces the experimental and theoretical data published by Forti for dark-adapted rods, and fits significantly better under light-adapted conditions. The new model successfully introduces a physical optical law into the standard model of phototransduction, adding a new processing layer that had not been mathematically implemented before. In addition, it describes the physiological concept of saturation and delivers outputs consistent with input magnitudes. Copyright © 2017 Elsevier Ltd. All rights reserved.
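
    The Beer-Lambert ingredient can be sketched directly: the outer segment's absorptance depends on the remaining rhodopsin pool, which is depleted by activation and slowly regenerated. All constants below are illustrative, not the paper's fitted values:

        import numpy as np

        eps_c = 0.02          # absorption coefficient x concentration scale (1/um)
        length = 25.0         # outer segment length (um)
        R_total = 1.0e8       # rhodopsin pool (molecules)
        R = R_total
        tau_regen = 300.0     # regeneration time constant (s)
        dt = 0.01

        def active_rhodopsin_rate(photon_flux, R):
            # Beer-Lambert: absorptance falls as the available pigment is depleted
            absorptance = 1.0 - np.exp(-eps_c * length * (R / R_total))
            return photon_flux * absorptance    # activated rhodopsins per second

        for _ in range(int(10 / dt)):           # 10 s of bright light
            rate = active_rhodopsin_rate(1.0e6, R)
            R += (-rate + (R_total - R) / tau_regen) * dt   # depletion + regeneration

        print("remaining rhodopsin fraction:", R / R_total)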

  11. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  12. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (threshold autoregressive) model with an AARCH (asymmetric autoregressive conditional heteroskedasticity) error term, and its parameter estimation is studied.
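
    A two-regime TAR(1) process is easy to simulate, which makes the regime-switching idea concrete (coefficients, threshold and noise are illustrative; an (A)ARCH error term would replace the constant-variance noise used here):

        import numpy as np

        # Two-regime TAR(1): the AR coefficient switches when the previous
        # value crosses a threshold.
        rng = np.random.default_rng(3)
        n, threshold = 1000, 0.0
        y = np.zeros(n)
        for t in range(1, n):
            phi = 0.7 if y[t - 1] <= threshold else -0.4   # regime-dependent AR(1)
            y[t] = phi * y[t - 1] + rng.normal(scale=1.0)

        print(y[:5])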

  13. Constitutive model with time-dependent deformations

    DEFF Research Database (Denmark)

    Krogsbøll, Anette

    1998-01-01

    ... are common in time as well as size. This problem is addressed by means of a new constitutive model for soils. It is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately. They are related to each other and they occur ... was the difference in time scale between the geological process of deposition (millions of years) and the laboratory measurements of mechanical properties (minutes or hours). In addition, the time scale relevant to the production history of the oil field was interesting (days or years)....

  14. Developing a framework to model the primary drying step of a continuous freeze-drying process based on infrared radiation

    DEFF Research Database (Denmark)

    Van Bockstal, Pieter-Jan; Corver, Jos; Mortier, Séverine Thérèse F.C.

    2018-01-01

    ... requires the fundamental mechanistic modelling of each individual process step. Therefore, a framework is presented for the modelling and control of the continuous primary drying step based on non-contact IR radiation. The IR radiation emitted by the radiator filaments passes through various materials ... These results assist in the selection of proper materials which could serve as an IR window in the continuous freeze-drying prototype. The modelling framework presented in this paper fits the model-based design approach used for the development of this prototype and shows the potential benefits of this design ...

  15. Process analysis and modeling of a single-step lutein extraction method for wet microalgae.

    Science.gov (United States)

    Gong, Mengyue; Wang, Yuruihan; Bassi, Amarjeet

    2017-11-01

    Lutein is a commercial carotenoid with potential health benefits. Microalgae are an alternative source for lutein production, in comparison to conventional approaches using marigold flowers. In this study, a process analysis of a single-step simultaneous extraction, saponification, and primary purification process for free lutein production from wet microalgae biomass was carried out. The feasibility of binary solvent mixtures for wet biomass extraction was successfully demonstrated, and the extraction kinetics of lutein from the chloroplast in microalgae were evaluated for the first time. The effects of the type of organic solvent, solvent polarity, cell disruption method, and alkali and solvent usage on lutein yields were examined. A mathematical model based on Fick's second law of diffusion was applied to model the experimental data. The mass transfer coefficients were used to estimate the extraction rates. The extraction rate was found to be more significantly related to the ratio of alkali to solvent than to the ratio of alkali to biomass. The best conditions for extraction efficiency were found to be pre-treatment with ultrasonication at a 0.5 s working cycle per second, reaction for 0.5 h at a solvent-to-biomass ratio of 0.27 L/g, and 1:3 ether/ethanol (v/v) with 1.25 g KOH/L. The entire process can be completed within 1 h and yields over 8 mg/g lutein, which is more economical for scale-up.
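
    A first-order kinetic law of the kind obtained from Fick's second law for a well-mixed solvent, C(t) = C_inf * (1 - exp(-k t)), can be fitted to an extraction time course as follows (the data points are synthetic, for illustration only):

        import numpy as np
        from scipy.optimize import curve_fit

        # First-order extraction kinetics; k lumps the mass transfer coefficient.
        def extraction(t, c_inf, k):
            return c_inf * (1.0 - np.exp(-k * t))

        t_min = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)
        lutein = np.array([3.1, 5.0, 6.2, 6.9, 7.6, 7.9, 8.1])   # mg/g, synthetic

        (c_inf, k), _ = curve_fit(extraction, t_min, lutein, p0=(8.0, 0.05))
        print(f"C_inf = {c_inf:.2f} mg/g, k = {k:.3f} 1/min")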

  16. Double ionization of atoms by ion impact: two-step models

    Energy Technology Data Exchange (ETDEWEB)

    Fiori, Marcelo [Departamento de Fisica, Universidad Nacional de Salta, Salta (Argentina); Rocha, A B [Instituto de Quimica, Departamento de FIsico-Quimica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21949-900, RJ (Brazil); Bielschowsky, C E [Instituto de Quimica, Departamento de FIsico-Quimica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21949-900, RJ (Brazil); Jalbert, Ginette [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, 21941-972, RJ (Brazil); Garibotti, C R [CONICET and Centro Atomico Bariloche, 8400 S. C. Bariloche, RIo Negro (Argentina)

    2006-04-14

    Total cross sections for the double ionization of He and Li atoms by the impact of H{sup +}, He{sup 2+} and Li{sup 3+} are calculated at intermediate and high energies within two-step models. The double ionization of He by the impact of other bare projectiles at a fixed energy is obtained as well. Single ionization probabilities are calculated within the continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximation. The required atomic bound and continuum wave functions are evaluated by numerically solving the atomic wave equation with an optimized potential model (OPM). Correlation between events is introduced by considering ion relaxation. The final state electronic correlation is considered by means of the so-called Gamow factor. We compare the transition probabilities resulting from our approach with those resulting from the use of a Roothaan-Hartree-Fock initial state and a Coulomb continuum state with an effective charge. We find that the use of OPM waves gives better agreement with the experimental results than Coulomb waves.

  17. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    Science.gov (United States)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that the Dance-the-Music platform is effective in helping dance students to master the basics of dance figures.
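
    One common way to score a match between a student's trace and a teacher's spatiotemporal template is dynamic time warping (DTW), which tolerates tempo differences; the sketch below uses DTW purely as an illustration, on 1-D toy signals standing in for joint trajectories:

        import numpy as np

        # Classic O(n*m) dynamic time warping distance between two sequences.
        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        template = np.sin(np.linspace(0, 2 * np.pi, 50))            # teacher's step
        performance = np.sin(np.linspace(0, 2 * np.pi, 60)) + 0.05  # slower student
        print("DTW score:", dtw_distance(template, performance))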

  18. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from the observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission is intermittently interrupted for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series is utilized, with missing values generated by applying a particular random process to the original (complete) time series. There are two main performance measures used in this work: (1) error measures between the actual
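
    The reconstruction step can be sketched with a plain time-delay (Takens) embedding; the embedding dimension, delay and toy signal below are illustrative (in practice they are chosen via methods such as mutual information and false nearest neighbours):

        import numpy as np

        # Time-delayed phase space reconstruction: each state vector stacks
        # lagged copies of the scalar series.
        def embed(x, dim, tau):
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

        t = np.arange(0, 100, 0.1)
        surge = np.sin(t) + 0.5 * np.sin(2.2 * t)   # toy stand-in for surge data
        Y = embed(surge, dim=3, tau=8)               # reconstructed trajectory
        print(Y.shape)

        # Local-model prediction then finds dynamical neighbours of the current
        # state vector in Y and extrapolates from how those neighbours evolved.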

  19. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while in general model-based approaches present serious drawbacks on this point

  1. [Validation of a triage scale: first step in patient admission and in emergency service models].

    Science.gov (United States)

    Legrand, A; Thys, F; Vermeiren, E; Touwaide, M; D'Hoore, W; Hubin, V; Reynaert, M S

    2003-03-01

    At present, most emergency services handle a multitude of various demands in the same unit of place and with the same team of nursing aides, with direct consequences on waiting times and on the handling of problems of varying degrees of importance. Our service examined other administrative models based on triage by time and by orientation. In a prospective study of 679 patients, we validated a triage tool inspired by the ICEM model (International Cooperation of Emergency Medicine), allowing patients to receive, while they wait, information and training based on the resources provided, in order to deal with their particular medical problem. The validation of this tool was carried out in terms of its utilization as well as its reliability. It appears that, with the type of triage offered, there is a theoretical reserve of waiting time among patients whose urgency is relative, which could be better used in the handling of more vital cases.

  2. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time-varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean-reverting jump diffusion and the time change as an absolutely continuous stochastic process with a seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use the temperature as a proxy for the demand and hence as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
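
    The time-change idea can be sketched by driving a mean-reverting jump diffusion with a seasonally varying activity rate, so that volatility and spike intensity rise when the operational clock runs fast. All parameters are illustrative, not calibrated EEX values:

        import numpy as np

        rng = np.random.default_rng(4)
        days = 365
        kappa, mu, sigma = 0.2, 50.0, 2.0     # mean reversion, level, volatility
        jump_prob, jump_size = 0.02, 25.0     # spike intensity and magnitude

        # Activity rate of the time change: deterministic seasonality (high in
        # winter, low in summer) plus a small stochastic component.
        activity = (1.0 + 0.6 * np.cos(2 * np.pi * np.arange(days) / 365)
                    + 0.1 * np.abs(rng.normal(size=days)))

        price = np.empty(days)
        price[0] = mu
        for t in range(1, days):
            dtau = activity[t]                # operational time elapsed today
            jump = jump_size * (rng.random() < jump_prob * dtau)
            price[t] = (price[t - 1]
                        + kappa * (mu - price[t - 1]) * dtau
                        + sigma * np.sqrt(dtau) * rng.normal()
                        + jump)
        print(price[:5])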

  3. Webinar Presentation: Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time

    Science.gov (United States)

    This presentation, Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time, was given at the NIEHS/EPA Children's Centers 2016 Webinar Series: Exposome.

  4. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows one to formulate complex measures – involving expected as well as accumulated rewards – in a precise and
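
    For a flavor of the reward layer: on a small discrete-time Markov reward model, the expected reward accumulated until absorption solves a linear system over the transient states. A toy sketch (the chain and rewards are illustrative, not from the paper):

        import numpy as np

        # Expected accumulated reward until absorption: solve x = r + P x over
        # the transient states, i.e. (I - P_TT) x = r_T.
        P = np.array([[0.5, 0.3, 0.2],     # states 0,1 transient; state 2 absorbing
                      [0.1, 0.6, 0.3],
                      [0.0, 0.0, 1.0]])
        r = np.array([2.0, 1.0, 0.0])      # reward earned per step in each state

        T = [0, 1]                          # transient states
        x = np.linalg.solve(np.eye(2) - P[np.ix_(T, T)], r[T])
        print("expected accumulated reward from state 0:", x[0])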

  6. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  7. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    textabstractThis paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  8. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  9. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
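
    The time-transformation idea can be sketched directly: if the intensity matrix factorizes as Q(t) = Q0 * h'(t), the process is homogeneous on the operational time scale s = h(t), and transition probabilities follow from a matrix exponential. Q0 and h below are illustrative, not the study's fitted quantities:

        import numpy as np
        from scipy.linalg import expm

        Q0 = np.array([[-0.5, 0.4, 0.1],   # toy 3-state illness-death intensities
                       [0.3, -0.5, 0.2],
                       [0.0, 0.0, 0.0]])   # state 2 absorbing

        def h(t, a=0.7):                    # operational time, e.g. a power law
            return t ** a

        for t in (0.5, 1.0, 2.0, 4.0):
            P = expm(Q0 * h(t))             # transition probabilities over (0, t]
            print(f"t={t}: P(state 0 -> state 1) = {P[0, 1]:.3f}")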

  10. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there is a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the-art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are explained.

  11. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Science.gov (United States)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. Also, variation of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0% for both low and high density conditions are reproduced.

  12. Sensitivity analysis of model output - a step towards robust safety indicators?

    International Nuclear Information System (INIS)

    Broed, R.; Pereira, A.; Moberg, L.

    2004-01-01

    The protection of the environment from ionising radiation challenges the radioecological community with the issue of harmonising disparate safety indicators. These indicators should preferably cover the whole spectrum of model predictions on the chemo-toxic and radiation impact of contaminants. In question is not only the protection of man and biota but also of abiotic systems. In many cases modelling will constitute the basis for an evaluation of potential impact. It is recognised that uncertainty and sensitivity analysis of model output will play an important role in the 'construction' of safety indicators that are robust, reliable and easy to explain to all groups of stakeholders including the general public. However, environmental models of transport of radionuclides have some extreme characteristics. They are, a) complex, b) non-linear, c) include a huge number of input parameters, d) input parameters are associated with large or very large uncertainties, e) parameters are often correlated to each other, f) uncertainties other than parameter-driven may be present in the modelling system, g) space variability and time-dependence of parameters are present, h) model predictions may cover geological time scales. Consequently, uncertainty and sensitivity analysis are non-trivial tasks, challenging the decision-maker when it comes to the interpretation of safety indicators or the application of regulatory criteria. In this work we use the IAEA model ISAM to make a set of Monte Carlo calculations. The ISAM model includes several nuclides and decay chains, many compartments and variable parameters covering the range of nuclide migration pathways from the near field to the biosphere. The goal of our calculations is to make a global sensitivity analysis. After extracting the non-influential parameters, the Monte Carlo calculations are repeated with those parameters frozen. Reducing the number of parameters to a few will simplify the interpretation of the results and the use

  13. Color Shift Modeling of Light-Emitting Diode Lamps in Step-Loaded Stress Testing

    NARCIS (Netherlands)

    Cai, Miao; Yang, Daoguo; Huang, J.; Zhang, Maofen; Chen, Xianping; Liang, Caihang; Koh, S.W.; Zhang, G.Q.

    2017-01-01

    The color coordinate shift of light-emitting diode (LED) lamps is investigated by running three stress-loaded testing methods, namely step-up stress accelerated degradation testing, step-down stress accelerated degradation testing, and constant stress accelerated degradation testing. A power

  14. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
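
    As a concrete instance of the digital waveguide idea for a vibrating string, a Karplus-Strong-style pluck (a delay line whose length sets the pitch, closed by a two-point averaging loss filter) fits in a few lines; the sample rate, pitch and loss factor are illustrative choices, not values from the article:

        import numpy as np
        from collections import deque

        def plucked_string(f0=440.0, fs=44100, dur=1.0, loss=0.996):
            """Karplus-Strong: a delay line of length fs/f0 with a 2-point loop filter."""
            n = max(2, int(fs / f0))                   # delay-line length sets the pitch
            line = deque(np.random.uniform(-1, 1, n))  # pluck = noise burst in the line
            out = np.empty(int(fs * dur))
            for i in range(out.size):
                first = line.popleft()
                out[i] = first
                # averaging the two oldest samples lowpasses the loop -> natural decay
                line.append(loss * 0.5 * (first + line[0]))
            return out

        samples = plucked_string()   # write to a .wav file or plot to inspect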

  15. Invited Commentary: Little Steps Lead to Huge Steps-It's Time to Make Physical Inactivity Our Number 1 Public Health Enemy.

    Science.gov (United States)

    Church, Timothy S

    2016-11-01

    The analysis plan and article in this issue of the Journal by Evenson et al. (Am J Epidemiol 2016;184(9):621-632) are well conceived, thoughtfully conducted, and tightly written. The authors utilized the National Health and Nutrition Examination Survey data set to examine the association between accelerometer-measured physical activity level and mortality and found that meeting the 2013 federal Physical Activity Guidelines resulted in a 35% reduction in risk of mortality. The timing of these findings could not be better, given the ubiquitous nature of personal accelerometer devices. The masses are already equipped to routinely quantify their activity, and now we have the opportunity and responsibility to provide evidence-based, tailored physical activity goals. We have evidence-based physical activity guidelines, mass distribution of devices to track activity, and now scientific support indicating that meeting the physical activity goal, as assessed by these devices, has substantial health benefits. All of the pieces are in place to make physical inactivity a national priority, and we now have the opportunity to positively affect the health of millions of Americans. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches.
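
    The architecture search described above can be caricatured as a genetic algorithm over two integers: the number of delayed inputs and the number of hidden neurons. The fitness function below is a deliberate placeholder (in the paper it would be the validation error of a network trained with the modified Levenberg-Marquardt algorithm):

        import random

        def fitness(n_lags, n_hidden):
            # placeholder for "train network, return validation error";
            # this toy surface has its optimum at (5, 12)
            return (n_lags - 5) ** 2 + 0.5 * (n_hidden - 12) ** 2

        def evolve(pop_size=20, gens=30):
            pop = [(random.randint(1, 20), random.randint(1, 40)) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=lambda g: fitness(*g))
                parents = pop[: pop_size // 2]            # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = (a[0], b[1])                  # one-point crossover
                    if random.random() < 0.3:             # integer mutation
                        child = (max(1, child[0] + random.choice([-1, 1])), child[1])
                    children.append(child)
                pop = parents + children
            return min(pop, key=lambda g: fitness(*g))

        print(evolve())   # converges near (5, 12) for this toy fitness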

  17. On the impacts of coarse-scale models of realistic roughness on a forward-facing step turbulent flow

    International Nuclear Information System (INIS)

    Wu, Yanhua; Ren, Huiying

    2013-01-01

    Highlights: ► Discrete wavelet transform was used to produce coarse-scale models of roughness. ► PIV measurements were performed in a forward-facing step flow with roughness of different scales. ► Impacts of roughness scales on various turbulence statistics were studied. -- Abstract: The present work explores the impacts of coarse-scale models of realistic roughness on the turbulent boundary layers over forward-facing steps. The surface topographies of different scale resolutions were obtained from a novel multi-resolution analysis using the discrete wavelet transform. PIV measurements were performed in the streamwise–wall-normal (x–y) planes at two different spanwise positions in turbulent boundary layers at Re_h = 3450 and δ/h = 8, where h is the mean step height and δ is the incoming boundary layer thickness. It was observed that large-scale but low-amplitude roughness scales had small effects on the forward-facing step turbulent flow. For the higher-resolution model of the roughness, the turbulence characteristics within 2h downstream of the steps are observed to be distinct from those over the original realistic rough step at a measurement position where the roughness profile possesses a positive slope immediately after the step’s front. On the other hand, much smaller differences exist in the flow characteristics at the other measurement position, whose roughness profile possesses a negative slope following the step’s front.

  18. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    Science.gov (United States)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.

  19. Multi-Step Usage of in Vivo Models During Rational Drug Design and Discovery

    Directory of Open Access Journals (Sweden)

    Charles H. Williams

    2011-04-01

    Full Text Available In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it is a herculean challenge requiring exorbitant resources, and it often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems-biology-based approach, paired with the power of computational biology, genetics, and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design.

  20. A Two-Step Hybrid Approach for Modeling the Nonlinear Dynamic Response of Piezoelectric Energy Harvesters

    Directory of Open Access Journals (Sweden)

    Claudio Maruccio

    2018-01-01

    Full Text Available An effective hybrid computational framework is described here in order to assess the nonlinear dynamic response of piezoelectric energy harvesting devices. The proposed strategy basically consists of two steps. First, fully coupled multiphysics finite element (FE analyses are performed to evaluate the nonlinear static response of the device. An enhanced reduced-order model is then derived, where the global dynamic response is formulated in the state-space using lumped coefficients enriched with the information derived from the FE simulations. The electromechanical response of piezoelectric beams under forced vibrations is studied by means of the proposed approach, which is also validated by comparing numerical predictions with some experimental results. Such numerical and experimental investigations have been carried out with the main aim of studying the influence of material and geometrical parameters on the global nonlinear response. The advantage of the presented approach is that the overall computational and experimental efforts are significantly reduced while preserving a satisfactory accuracy in the assessment of the global behavior.

  1. Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS

    Directory of Open Access Journals (Sweden)

    Nicolas Sommet

    2017-09-01

    Full Text Available This paper aims to introduce multilevel logistic regression analysis in a simple and practical way. First, we introduce the basic principles of logistic regression analysis (conditional probability, logit transformation, odds ratio). Second, we discuss the two fundamental implications of running this kind of analysis with a nested data structure: in multilevel logistic regression, the odds that the outcome variable equals one (rather than zero) may vary from one cluster to another (i.e., the intercept may vary), and the effect of a lower-level variable may also vary from one cluster to another (i.e., the slope may vary). Third and finally, we provide a simplified three-step “turnkey” procedure for multilevel logistic regression modeling:
    - Preliminary phase: cluster- or grand-mean centering variables
    - Step #1: running an empty model and calculating the intraclass correlation coefficient (ICC)
    - Step #2: running a constrained and an augmented intermediate model and performing a likelihood ratio test to determine whether considering the cluster-based variation of the effect of the lower-level variable improves the model fit
    - Step #3: running a final model and interpreting the odds ratio and confidence intervals to determine whether the data support your hypothesis
    Command syntax for Stata, R, Mplus, and SPSS is included. These steps will be applied to a study on Justin Bieber, because everybody likes Justin Bieber.
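
    For reference, the intraclass correlation coefficient computed in Step #1 uses the fixed level-1 variance of the standard logistic distribution:

        \mathrm{ICC} \;=\; \frac{\sigma_{u_0}^{2}}{\sigma_{u_0}^{2} + \pi^{2}/3},

    where \sigma_{u_0}^{2} is the cluster-level intercept variance estimated by the empty model and \pi^{2}/3 \approx 3.29 is the variance of the standard logistic distribution.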

  2. Identification of Dobrava, Hantaan, Seoul, and Puumala viruses by one-step real-time RT-PCR.

    Science.gov (United States)

    Aitichou, Mohamed; Saleh, Sharron S; McElroy, Anita K; Schmaljohn, C; Ibrahim, M Sofi

    2005-03-01

    We developed four assays for specifically identifying Dobrava (DOB), Hantaan (HTN), Puumala (PUU), and Seoul (SEO) viruses. The assays are based on the real-time one-step reverse transcriptase polymerase chain reaction (RT-PCR) with the small segment used as the target sequence. The detection limits of DOB, HTN, PUU, and SEO assays were 25, 25, 25, and 12.5 plaque-forming units, respectively. The assays were evaluated in blinded experiments, each with 100 samples that contained Andes, Black Creek Canal, Crimean-Congo hemorrhagic fever, Rift Valley fever and Sin Nombre viruses in addition to DOB, HTN, PUU and SEO viruses. The sensitivity levels of the DOB, HTN, PUU, and SEO assays were 98%, 96%, 92% and 94%, respectively. The specificity of DOB, HTN and SEO assays was 100% and the specificity of the PUU assay was 98%. Because of the high levels of sensitivity, specificity, and reproducibility, we believe that these assays can be useful for diagnosing and differentiating these four Old-World hantaviruses.

  3. A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.

    Science.gov (United States)

    Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M

    2014-01-01

    Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified; however, their clinical significance is unknown. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus, and subsequently only the positive samples undergo a real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, and is thus suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Potentials and Limitations of Real-Time Elastography for Prostate Cancer Detection: A Whole-Mount Step Section Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Junker

    2012-01-01

    Full Text Available Objectives. To evaluate prostate cancer (PCa) detection rates of real-time elastography (RTE) in dependence of tumor size, tumor volume, localization and histological type. Materials and Methods. Thirty-nine patients with biopsy-proven PCa underwent RTE before radical prostatectomy (RPE) to assess prostate tissue elasticity, and hard lesions were considered suspicious for PCa. After RPE, the prostates were prepared as whole-mount step sections and were compared with imaging findings for analyzing PCa detection rates. Results. RTE detected 6/62 cancer lesions with a maximum diameter of 0–5 mm (9.7%), 10/37 with a maximum diameter of 6–10 mm (27%), 24/34 with a maximum diameter of 11–20 mm (70.6%), 14/14 with a maximum diameter of >20 mm (100%) and 40/48 with a volume ≥0.2 cm³ (83.3%). Regarding cancer lesions with a volume ≥0.2 cm³, there was a significant difference in PCa detection rates between Gleason scores with predominant Gleason pattern 3 compared to those with predominant Gleason pattern 4 or 5 (75% versus 100%; P=0.028). Conclusions. RTE is able to detect PCa of significant tumor volume and of predominant Gleason pattern 4 or 5 with high confidence, but is of limited value in the detection of small cancer lesions.

  5. A Step-Indexed Kripke Model of Hidden State via Recursive Properties on Recursively Defined Metric Spaces

    DEFF Research Database (Denmark)

    Schwinghammer, Jan; Birkedal, Lars; Støvring, Kristian

    2011-01-01

    …Charguéraud and Pottier’s type and capability system, including both frame and anti-frame rules. The model is a possible-worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend...

  6. Instability of a two-step Rankine vortex in a reduced gravity QG model

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, Xavier [Laboratoire de Météorologie Dynamique, Ecole Normale Supérieure, 24 rue Lhomond, F-75005 Paris (France); Carton, Xavier, E-mail: xperrot@lmd.ens.fr, E-mail: xcarton@univ-brest.fr [Laboratoire de Physique des Océans, Université de Bretagne Occidentale, 6 avenue Le Gorgeu, F-29200 Brest (France)

    2014-06-01

    We investigate the stability of a steplike Rankine vortex in a one-active-layer, reduced gravity, quasi-geostrophic model. After calculating the linear stability with a normal mode analysis, the singular modes are determined as a function of the vortex shape to investigate short-time stability. Finally we determine the position of the critical layer and show its influence when it lies inside the vortex. (papers)

  7. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
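
    Of the three filters named in the abstract, the second-order Shapiro filter is the easiest to sketch. The snippet below is a generic periodic 1-D version for illustration, not a transcription of the EXSHALL FORTRAN:

        import numpy as np

        def shapiro_filter(u, passes=1):
            """Second-order Shapiro filter on a periodic 1-D field:
            u_i <- (u_{i-1} + 2*u_i + u_{i+1}) / 4, damping 2-grid-point noise."""
            for _ in range(passes):
                u = 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))
            return u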

  8. Step training improves reaction time, gait and balance and reduces falls in older people: a systematic review and meta-analysis.

    Science.gov (United States)

    Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R

    2017-04-01

    To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65, p<0.0001) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68, p<0.0001). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance and timed up and go performance (p<0.05). Overall, the findings indicate that stepping interventions reduce falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  9. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to

  10. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from the single-step method and the traditional pedigree-based method for two litter size traits, total number of piglets born (TNB) and litter size at five days after birth (Ls 5), in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only…

  11. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  12. Modeling biological pathway dynamics with timed automata.

    Science.gov (United States)

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.

  13. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet, to largely avoid redundant frequency information and to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical Methods of Optimization, vol. 1: Unconstrained Optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Française Informat Recherche Opérationelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that Interp is more efficient for noise-free data, while Direct is more efficient for Gaussian white noise data. In contrast, Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the…
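
    For orientation, the CG versions compared in the abstract differ only in the scalar \beta_{k+1} used to build the new search direction d_{k+1} = -g_{k+1} + \beta_{k+1} d_k. With y_k = g_{k+1} - g_k, the standard definitions of the variants singled out above are:

        \beta_{k+1}^{HS} = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k}, \qquad
        \beta_{k+1}^{PRP} = \frac{g_{k+1}^{\top} y_k}{g_k^{\top} g_k}, \qquad
        \beta_{k+1}^{CD} = -\frac{g_{k+1}^{\top} g_{k+1}}{d_k^{\top} g_k}, \qquad
        \beta_{k+1}^{DY} = \frac{g_{k+1}^{\top} g_{k+1}}{d_k^{\top} y_k}.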

  14. Modeling imperfectly repaired system data via grey differential equations with unequal-gapped times

    International Nuclear Information System (INIS)

    Guo Renkuan

    2007-01-01

    In this paper, we argue that grey differential equation models are useful in repairable system modeling. The argument starts with a review of the GM(1,1) model with equal- and unequal-spaced stopping time sequences. In terms of two-stage GM(1,1) filtering, system stopping time can be partitioned into a system intrinsic function and a repair effect. Furthermore, we propose an approach that uses a grey differential equation to specify a semi-statistical membership function for system intrinsic function times. We also use the GM(1,N) model to model system stopping times together with the associated operating covariates, and propose an unequal-gapped GM(1,N) model for such analysis. Finally, we investigate GM(1,1)-embedded systematic grey equation system modeling of imperfectly repaired system operating data. Practical examples are given in a step-by-step manner to illustrate grey differential equation modeling of repairable system data.
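
    The classical equal-spaced GM(1,1) construction underlying this discussion fits in a few lines; the paper's unequal-gapped and GM(1,N) variants generalize this scheme. A minimal sketch using the accumulated generating operation (AGO) and least squares:

        import numpy as np

        def gm11_fit(x0):
            """Fit GM(1,1) to a positive series x0: dx1/dt + a*x1 = b."""
            x1 = np.cumsum(x0)                       # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) values
            B = np.column_stack([-z1, np.ones(len(z1))])
            (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
            return a, b

        def gm11_predict(x0, a, b, n_ahead=3):
            k = np.arange(1, len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x1_hat = np.concatenate([[x0[0]], x1_hat])
            return np.diff(x1_hat)                   # inverse AGO -> x0 forecasts

        x0 = np.array([2.87, 3.28, 3.34, 3.77, 3.83, 4.25])
        a, b = gm11_fit(x0)
        print(gm11_predict(x0, a, b))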

  15. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
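
    A minimal sketch of the copula-transformed autoregression idea, with an empirical marginal standing in for the nonparametric Bayesian prior and an AR(1) standing in for the normal-theory internal dynamics:

        import numpy as np
        from scipy import stats

        def copula_ar1_simulate(x, n_sim, seed=0):
            """Gaussian-copula AR(1): empirical marginal + normal-scores dynamics."""
            rng = np.random.default_rng(seed)
            u = (stats.rankdata(x) - 0.5) / len(x)    # empirical PIT of the series
            z = stats.norm.ppf(u)                     # Gaussian scores
            phi = np.corrcoef(z[:-1], z[1:])[0, 1]    # lag-1 dependence, Gaussian space
            zs = np.empty(n_sim)
            zs[0] = rng.standard_normal()
            innov = rng.standard_normal(n_sim) * np.sqrt(1.0 - phi**2)
            for t in range(1, n_sim):
                zs[t] = phi * zs[t - 1] + innov[t]
            return np.quantile(x, stats.norm.cdf(zs))  # back through the marginal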

  16. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
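
    The automatic time step control mentioned above follows the usual embedded-pair pattern: the difference between the fourth-order and embedded third-order solutions estimates the local truncation error, and the step is rescaled against the tolerance. A generic controller (not the authors' exact coefficients) looks like:

        def adapt_step(dt, err, tol, k=4.0, safety=0.9, fac_min=0.2, fac_max=5.0):
            """Classical controller: err ~ dt**k for a 4th-order method with an
            embedded 3rd-order error estimate, so rescale dt by (tol/err)**(1/k);
            the step is rejected and retried whenever err > tol."""
            fac = safety * (tol / max(err, 1e-300)) ** (1.0 / k)
            return dt * min(fac_max, max(fac_min, fac))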

  17. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  18. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with…

  19. A two-phase model of plantar tissue: a step toward prediction of diabetic foot ulceration.

    Science.gov (United States)

    Sciumè, G; Boso, D P; Gray, W G; Cobelli, C; Schrefler, B A

    2014-11-01

    A new computational model, based on the thermodynamically constrained averaging theory, has recently been proposed to predict tumor initiation and proliferation. A similar mathematical approach is proposed here as an aid in diabetic ulcer prevention. The common aspects at the continuum level are the macroscopic balance equations governing the flow of the fluid phase, diffusion of chemical species, tissue mechanics, and some of the constitutive equations. The soft plantar tissue is modeled as a two-phase system: a solid phase consisting of the tissue cells and their extracellular matrix, and a fluid one (interstitial fluid and dissolved chemical species). The solid phase may become necrotic depending on the stress level and on the oxygen availability in the tissue. In diabetic patients, peripheral vascular disease influences tissue necrosis; this is considered in the model via the introduction of an effective diffusion coefficient that governs transport of nutrients within the microvasculature. The governing equations of the mathematical model are discretized in space by the finite element method and in the time domain using the θ-Wilson method. While the full mathematical model is developed in this paper, the example is limited to the simulation of several gait cycles of a healthy foot. Copyright © 2014 John Wiley & Sons, Ltd.
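
    For context, the θ-Wilson scheme referenced above assumes the acceleration varies linearly over an extended step \theta\,\Delta t (with \theta \ge 1.37 giving unconditional stability for linear problems):

        \ddot{u}(t+\tau) \;=\; \ddot{u}_t + \frac{\tau}{\theta\,\Delta t}\left(\ddot{u}_{t+\theta\Delta t} - \ddot{u}_t\right),
        \qquad 0 \le \tau \le \theta\,\Delta t.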

  20. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    Science.gov (United States)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris-flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslides patterns based on digital elevation models (SRTM) linked with high resolution global soil maps (SoilGrids 250 m resolution) and satellite based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform to simulate effects of deforestation on landslide hazards in several regions and compare model outcome with satellite based information.

  1. First step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1984-09-01

    The first step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code is presented. This step consists of the introduction of a simplified model for simulating the steam generator. This model is the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code has a model for the steam generator, called UTSG, which is very detailed. This model has spatial dependence, correlations for 2-phase flow, and distinct correlations for different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous gaseous-liquid mixture), has no spatial dependence, and uses only one generalized correlation to treat several heat transfer processes. (Author)

  2. A two-step ionospheric modeling algorithm considering the impact of GLONASS pseudo-range inter-channel biases

    Science.gov (United States)

    Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei

    2017-12-01

    The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). But the estimated TEC can be seriously affected by the differential code biases (DCB) of frequency-dependent satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable for a period of time and could reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given the characteristics of the GLONASS P1P2 pseudo-range IFB, we propose a two-step ionosphere modeling method with a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on ionospheric model and hardware delay parameter estimation in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC units); simultaneously, the average RMS of GPS satellite DCB decreased from 0.225 to 0.219 ns, and the average deviation of GLONASS satellite DCB decreased from 0.253 to 0.113 ns, an improvement of over 55%.
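
    As background for the modeling step, TEC estimation conventionally starts from the geometry-free pseudo-range combination, in which the ionospheric term appears together with satellite and receiver hardware delays; in a standard form (not necessarily the authors' notation),

        P_2 - P_1 \;=\; 40.3\,\mathrm{STEC}\left(\frac{1}{f_2^{2}} - \frac{1}{f_1^{2}}\right) + c\left(\mathrm{DCB}_r + \mathrm{DCB}^{s}\right).

    For GLONASS the receiver term additionally depends on the frequency channel of each satellite; this is the pseudo-range IFB that the two-step algorithm above calibrates with a priori information instead of absorbing it into a single receiver DCB.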

  3. Takeover times for a simple model of network infection.

    Science.gov (United States)

    Ottino-Löffler, Bertrand; Scott, Jacob G; Strogatz, Steven H

    2017-07-01

    We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N≫1, the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d-dimensional lattices with d≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.
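
    The model is simple enough to simulate directly; a minimal sketch (the complete graph is one of the topologies analyzed in the paper):

        import random

        def takeover_time(adj):
            """One run of the model: per time step, pick a random node and one of its
            neighbors; infection passes from an infected node to a susceptible neighbor."""
            n = len(adj)
            infected = {random.randrange(n)}
            t = 0
            while len(infected) < n:
                t += 1
                u = random.randrange(n)
                v = random.choice(adj[u])
                if u in infected and v not in infected:
                    infected.add(v)
            return t

        N = 50
        complete = [[j for j in range(N) if j != i] for i in range(N)]
        print(takeover_time(complete))   # histogram over many runs -> Gumbel-like tail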

  4. Modeling of the time sharing for lecturers

    Directory of Open Access Journals (Sweden)

    E. Yu. Shakhova

    2017-01-01

    Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of the university lecturer and others. The mathematical problem of optimal working time planning for university lecturers is presented. A review of the relevant documents and of native and foreign works on the subject is made. Simulation conditions, based on an analysis of the subject area, are defined. Models of optimal working time sharing for university lecturers («the second half of the day») are developed and implemented in MathCAD, and optimal solutions have been obtained. Three problems have been solved: (1) to find the optimal time sharing for «the second half of the day» in a certain position of the university lecturer; (2) to find the optimal time sharing for «the second half of the day» for all positions of the university lecturers in view of the established model of academic load differentiation; (3) to find the volume of the non-standardized part of work time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual professor in the preparation of the work plan of a university department for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.

  5. Human airway organoid engineering as a step toward lung regeneration and disease modeling.

    Science.gov (United States)

    Tan, Qi; Choi, Kyoung Moo; Sicard, Delphine; Tschumperlin, Daniel J

    2017-01-01

    Organoids represent both a potentially powerful tool for the study of cell-cell interactions within tissue-like environments, and a platform for tissue regenerative approaches. The development of lung tissue-like organoids from human adult-derived cells has not previously been reported. Here we combined human adult primary bronchial epithelial cells, lung fibroblasts, and lung microvascular endothelial cells in supportive 3D culture conditions to generate airway organoids. We demonstrate that randomly-seeded mixed cell populations undergo rapid condensation and self-organization into discrete epithelial and endothelial structures that are mechanically robust and stable during long-term culture. After condensation, airway organoids generate invasive multicellular tubular structures that recapitulate limited aspects of branching morphogenesis, and require actomyosin-mediated force generation and YAP/TAZ activation. Despite the proximal source of primary epithelium used in the airway organoids, discrete areas of both proximal and distal epithelial markers were observed over time in culture, demonstrating remarkable epithelial plasticity within the context of organoid cultures. Airway organoids also exhibited complex multicellular responses to a prototypical fibrogenic stimulus (TGF-β1) in culture, and limited capacity to undergo continued maturation and engraftment after ectopic implantation under the murine kidney capsule. These results demonstrate that the airway organoid system developed here represents a novel tool for the study of disease-relevant cell-cell interactions, and establishes this platform as a first step toward cell-based therapy for chronic lung diseases based on de novo engineering of implantable airway tissues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Dynamical Constants and Time Universals: A First Step toward a Metrical Definition of Ordered and Abnormal Cognition.

    Science.gov (United States)

    Elliott, Mark A; du Bois, Naomi

    2017-01-01

    From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data

  7. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows…

  8. Age-related differences in lower-limb force-time relation during the push-off in rapid voluntary stepping.

    Science.gov (United States)

    Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G

    2010-12-01

    This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to study the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Improving pain care through implementation of the Stepped Care Model at a multisite community health center

    Directory of Open Access Journals (Sweden)

    Anderson DR

    2016-11-01

    Full Text Available Daren R Anderson,1 Ianita Zlateva,1 Emil N Coman,2 Khushbu Khatri,1 Terrence Tian,1 Robert D Kerns3 1Weitzman Institute, Community Health Center, Inc., Middletown, 2UCONN Health Disparities Institute, University of Connecticut, Farmington, 3VA Connecticut Healthcare System, West Haven, CT, USA Purpose: Treating pain in primary care is challenging. Primary care providers (PCPs receive limited training in pain care and express low confidence in their knowledge and ability to manage pain effectively. Models to improve pain outcomes have been developed, but not formally implemented in safety net practices where pain is particularly common. This study evaluated the impact of implementing the Stepped Care Model for Pain Management (SCM-PM at a large, multisite Federally Qualified Health Center. Methods: The Promoting Action on Research Implementation in Health Services framework guided the implementation of the SCM-PM. The multicomponent intervention included: education on pain care, new protocols for pain assessment and management, implementation of an opioid management dashboard, telehealth consultations, and enhanced onsite specialty resources. Participants included 25 PCPs and their patients with chronic pain (3,357 preintervention and 4,385 postintervention cared for at Community Health Center, Inc. Data were collected from the electronic health record and supplemented by chart reviews. Surveys were administered to PCPs to assess knowledge, attitudes, and confidence. Results: Providers’ pain knowledge scores increased to an average of 11% from baseline; self-rated confidence in ability to manage pain also increased. Use of opioid treatment agreements and urine drug screens increased significantly by 27.3% and 22.6%, respectively. Significant improvements were also noted in documentation of pain, pain treatment, and pain follow-up. Referrals to behavioral health providers for patients with pain increased by 5.96% (P=0.009. There was no

  10. Space-time modeling of timber prices

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber timber prices of 21 geographically contiguous regions in the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...

  11. On modeling panels of time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a…

  12. Time Series Modelling using Proc Varmax

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2007-01-01

    In this paper it will be demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence new features like cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling as outlined by Box…

  13. Theory of Time beyond the standard model

    International Nuclear Information System (INIS)

    Poliakov, Eugene S.

    2008-01-01

    A frame of non-uniform time is discussed and a concept of 'flow of time' is presented. The principle of time relativity is set in analogy with the Galilean principle of relativity. An equivalence principle is set to state that the outcome of non-uniform time in an inertial frame of reference is equivalent to the outcome of a fictitious gravity force external to the frame of reference. Thus it is the flow of time that causes gravity, rather than mass. The latter claim is compared to experimental data, achieving a precision of up to 0.0003%. It is shown that the law of energy conservation is inapplicable to frames of non-uniform time. A theoretical model of a physical entity (point mass, photon) travelling in a field of non-uniform time is considered. A generalized law that allows the flow of time to replace classical energy conservation is introduced on the basis of the experiment of Pound and Rebka. It is shown that a linear dependence of the flow of time on the spatial coordinate conforms to the inverse square law of universal gravitation and to Keplerian mechanics. Momentum is shown to still be conserved.

  14. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice, where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  15. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    Directory of Open Access Journals (Sweden)

    Craig Cora L

    2011-06-01

    Background: This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods: Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent watching television were estimated using logistic regression for complex samples. Results: Girls had a lower median steps/day (10,682 versus 11,059 for boys) and also a narrower variation in steps/day (interquartile range, 4,410 versus 5,309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, … Discussion: Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions: In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.

  16. Time-resolved measurements of laser-induced diffusion of CO molecules on stepped Pt(111)-surfaces; Zeitaufgeloeste Untersuchung der laser-induzierten Diffusion von CO-Molekuelen auf gestuften Pt(111)-Oberflaechen

    Energy Technology Data Exchange (ETDEWEB)

    Lawrenz, M.

    2007-10-30

    In the present work, the dynamics of CO molecules on a stepped Pt(111) surface induced by fs laser pulses at low temperatures was studied using laser spectroscopy. In the first part of the work, laser-induced diffusion for the CO/Pt(111) system could be demonstrated and modelled successfully for step diffusion. At first, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled. A friction coefficient which depends on the electron temperature yields a consistent model, and the same set of parameters was used in parallel to understand both the fluence dependence and the time-resolved measurements. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces and the diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. The additionally performed two-pulse correlation measurements also indicate an electronically induced process. At a substrate temperature of 40 K, the cross-correlation, from which an energy transfer time of 1.8 ps was extracted, also suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed for different substrate temperatures. (orig.)

  17. Correction: Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS

    Directory of Open Access Journals (Sweden)

    Nicolas Sommet

    2017-12-01

    This article details a correction to the article: Sommet, N., & Morselli, D. (2017). Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS. International Review of Social Psychology, 30(1), pp. 203–218. DOI: https://doi.org/10.5334/irsp.90

  18. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    Science.gov (United States)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Highlights: The discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multivariate inputs. Forecasting accuracy of peak values and at longer lead times was significantly improved.
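
    The highlights describe a wavelet pre-processing stage feeding the ANN/ANFIS forecasters. A minimal sketch of that stage, assuming the PyWavelets package; the wavelet and decomposition level are illustrative choices, not the paper's settings:

      # Decompose a flow series with the discrete wavelet transform and
      # rebuild one sub-series per frequency band; the sub-series become
      # inputs to a downstream multi-step-ahead forecasting model.
      import numpy as np
      import pywt

      rng = np.random.default_rng(1)
      flow = np.sin(np.linspace(0, 20, 256)) + 0.3 * rng.normal(size=256)

      coeffs = pywt.wavedec(flow, "db4", level=3)   # [cA3, cD3, cD2, cD1]

      subseries = []
      for i in range(len(coeffs)):
          part = [np.zeros_like(c) for c in coeffs]
          part[i] = coeffs[i]                       # keep one band at a time
          subseries.append(pywt.waverec(part, "db4"))

      X = np.column_stack(subseries)                # features for 1-5 step forecasts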

  19. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
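
    The paper's quantum-inspired evolutionary algorithm is beyond a short example, but the objective it optimizes can be illustrated with a much simpler sketch: choose the sampling times that maximize the determinant of the Fisher information matrix (D-optimality), which shrinks the parameter confidence region. The toy decay model and brute-force search below are purely illustrative.

      import numpy as np
      from itertools import combinations

      def sensitivities(t, a, b):
          # Jacobian row dy/d(a, b) for the toy model y(t) = a * exp(-b t)
          return np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])

      a, b = 2.0, 0.5
      candidates = np.linspace(0.1, 10, 40)

      best, best_det = None, -np.inf
      for pts in combinations(candidates, 3):       # choose 3 sampling times
          J = np.array([sensitivities(t, a, b) for t in pts])
          det = np.linalg.det(J.T @ J)              # D-optimality criterion
          if det > best_det:
              best, best_det = pts, det

      print("D-optimal sampling times:", best)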

  20. Associations between the Objectively Measured Office Environment and Workplace Step Count and Sitting Time: Cross-Sectional Analyses from the Active Buildings Study.

    Science.gov (United States)

    Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi

    2018-06-01

    Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < …) … office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < …) … the further participants were from office destinations, the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other...

  1. Modelling of subcritical free-surface flow over an inclined backward-facing step in a water channel

    Directory of Open Access Journals (Sweden)

    Šulc Jan

    2012-04-01

    The contribution deals with the experimental and numerical modelling of subcritical turbulent flow in an open channel with an inclined backward-facing step. The step, with inclination angle α = 20°, was placed in a water channel of cross-section 200×200 mm. Experiments were carried out by means of the PIV and LDA measuring techniques. Numerical simulations were executed with the commercial software ANSYS CFX 12.0. Numerical results obtained for two-equation models and the EARSM turbulence model, completed by transport equations for turbulent energy and specific dissipation rate, were compared with experimental data. The modelling concentrated particularly on the development of the flow separation and on the corresponding changes of the free surface.

  2. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of mass scans that are possibly correlated; the correlation matrix is then calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET2 environment on Windows XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time-axis shifts and warping of chromatographic surfaces.
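
    The first (prealignment) step lends itself to a compact sketch: estimate the temporal offset between two chromatographic profiles by maximizing their cross-correlation, computed with fast Fourier transforms. The toy rectangular peaks below stand in for real total-ion-current profiles.

      import numpy as np

      def fft_offset(reference, sample):
          # Circular cross-correlation via FFT; returns the lag (in scans)
          # of `sample` relative to `reference`.
          n = len(reference) + len(sample) - 1
          nfft = 1 << (n - 1).bit_length()          # next power of two
          xcorr = np.fft.irfft(np.fft.rfft(reference, nfft) *
                               np.conj(np.fft.rfft(sample, nfft)), nfft)
          lag = int(np.argmax(xcorr))
          return lag - nfft if lag > nfft // 2 else lag

      ref = np.zeros(500); ref[200:210] = 1.0       # toy elution peak
      smp = np.zeros(500); smp[230:240] = 1.0       # same peak, shifted by 30 scans
      print(fft_offset(ref, smp))                   # -> -30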

  3. A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print

    Science.gov (United States)

    Knowlton, Steven A.

    2016-01-01

    Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…

  4. Study of the Inception Length of Flow over Stepped Spillway Models ...

    African Journals Online (AJOL)

    The results showed that the inception (development) length increases as the unit discharge increases and it decreases with an increase in both stepped roughness height and chute angle. The ratio of the development length, in this study, to that of Bauer's was found to be 4:5. Finally, SMM-5 produced the least velocity of ...

  5. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM)-based model in a vector setting (with least-squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases are presented employing time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model.

  6. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone and used its output to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on each cluster's underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.

  7. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities for integrating different kinds of problem-solving methods, with many benefits from the user's point of view. In particular, we propose a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  8. Discrete time modelization of human pilot behavior

    Science.gov (United States)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a time-discrete process, he can perform only one task at a time, and his operating mode depends on the considered flight subphase. Pilot behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.
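
    The first strategy admits a compact illustration: instrument scanning as a Markov chain whose successive readings are drawn from a matrix of conditional probabilities. The instrument names and transition probabilities below are invented for illustration.

      import numpy as np

      instruments = ["horizon", "airspeed", "altimeter"]
      P = np.array([[0.2, 0.5, 0.3],    # P(next | current = horizon)
                    [0.6, 0.1, 0.3],    # P(next | current = airspeed)
                    [0.5, 0.4, 0.1]])   # P(next | current = altimeter)

      rng = np.random.default_rng(42)
      state, scan = 0, []
      for _ in range(10):               # simulate ten successive readings
          state = rng.choice(3, p=P[state])
          scan.append(instruments[state])
      print(" -> ".join(scan))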

  9. Linear Parametric Model Checking of Timed Automata

    DEFF Research Database (Denmark)

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. We also present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  10. Multi-step magnetization of the Ising model on a Shastry-Sutherland lattice: a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Huang, W C; Huo, L; Tian, G; Qian, H R; Gao, X S; Qin, M H; Liu, J-M

    2012-01-01

    The magnetization behaviors and spin configurations of the classical Ising model on a Shastry-Sutherland lattice are investigated using Monte Carlo simulations, in order to understand the fascinating magnetization plateaus observed in TmB4 and other rare-earth tetraborides. The simulations reproduce the 1/2 magnetization plateau by taking into account the dipole-dipole interaction. In addition, a narrow 2/3 magnetization step at low temperature is predicted in our simulation. The multi-step magnetization can be understood as the consequence of the competition among the spin-exchange interaction, the dipole-dipole interaction, and the static magnetic energy.
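
    For readers unfamiliar with the method, a bare-bones Metropolis sketch for a nearest-neighbour Ising model in a field follows; the study's actual Hamiltonian additionally includes the Shastry-Sutherland bond geometry and long-range dipole-dipole terms, which are omitted here for brevity.

      import numpy as np

      rng = np.random.default_rng(0)
      L, J, H, T = 16, 1.0, 0.5, 1.5            # size, coupling, field, temperature
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(100_000):                  # single-spin-flip Metropolis updates
          i, j = rng.integers(L, size=2)
          nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
          dE = 2 * spins[i, j] * (J * nn + H)   # energy cost of flipping spin (i, j)
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              spins[i, j] *= -1

      print("magnetization per site:", spins.mean())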

  11. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. Highlights: We present an energy-conserving MRT finite-difference LB model. The moment space is constructed according to group representation theory. The new model works for both low- and high-speed compressible flows. It has better numerical stability and a wider applicable range than its SRT version.
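
    Schematically, the moment-space collision step described above has the generic MRT form (the paper's specific transformation matrix M and relaxation matrix S follow from group representation theory and are not reproduced here):

      \mathbf{m} = \mathbf{M}\,\mathbf{f}, \qquad
      \mathbf{m}^{*} = \mathbf{m} - \mathbf{S}\left(\mathbf{m} - \mathbf{m}^{\mathrm{eq}}\right), \qquad
      \mathbf{f}^{*} = \mathbf{M}^{-1}\,\mathbf{m}^{*}

    where f collects the distribution functions, m the moments, S is the diagonal matrix of relaxation rates, and the post-collision distributions f* are mapped back to velocity space by M⁻¹.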

  12. Modelling of Patterns in Space and Time

    CERN Document Server

    Murray, James

    1984-01-01

    This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...

  13. Secondary Special Education. Part I: The "Stepping Stone Model" Designed for Secondary Learning Disabled Students. Part II: Adapting Materials and Curriculum.

    Science.gov (United States)

    Fox, Barbara

    The paper describes the Stepping Stone Model, a model for the remediation and mainstreaming of secondary learning disabled students and the adaptation of curriculum and materials for the model. The Stepping Stone Model is designed to establish the independence of students in the mainstream through content reading. Five areas of concern common to…

  14. Space-time modeling of soil moisture

    Science.gov (United States)

    Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio

    2017-11-01

    A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively over the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of the order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.

  15. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Background: Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods: Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary-care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results: Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions: Time series methods applied to historical ED utilization data are an important tool
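
    A hedged sketch of the general recipe (not the paper's fitted models): fit a seasonal ARIMA to historical daily visit counts and flag days that exceed the forecast envelope. The orders, weekly seasonality and 3-sigma threshold below are illustrative.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      days = pd.date_range("2000-01-01", periods=3 * 365, freq="D")
      visits = (120 + 15 * np.sin(2 * np.pi * days.dayofyear / 365.25)
                + rng.normal(0, 8, len(days)))        # synthetic ED counts
      series = pd.Series(visits, index=days)

      fit = ARIMA(series, order=(1, 0, 1),
                  seasonal_order=(1, 0, 0, 7)).fit()  # weekly seasonality
      pred = fit.get_forecast(steps=14)
      upper = pred.predicted_mean + 3 * pred.se_mean  # alarm threshold
      print(upper.head())                             # flag observed > upper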

  16. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    Science.gov (United States)

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    Reactive Blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient was considered constant. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones, and this model described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
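
    The estimation step can be sketched compactly: minimize the squared error between a model-predicted and an observed breakthrough curve with the downhill simplex (Nelder-Mead) method. The two-parameter logistic curve below is a stand-in for the paper's fixed-bed transport models.

      import numpy as np
      from scipy.optimize import minimize

      t = np.linspace(0, 10, 50)
      observed = (1 / (1 + np.exp(-1.2 * (t - 5)))
                  + np.random.default_rng(1).normal(0, 0.02, t.size))

      def sse(params):
          k, t0 = params
          predicted = 1 / (1 + np.exp(-k * (t - t0)))  # toy C/C0 breakthrough curve
          return np.sum((observed - predicted) ** 2)

      res = minimize(sse, x0=[1.0, 4.0], method="Nelder-Mead")
      print("fitted k, t0:", res.x)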

  17. Outlier Detection in Structural Time Series Models

    DEFF Research Database (Denmark)

    Marczak, Martyna; Proietti, Tommaso

    Structural change affects the estimation of economic signals, like the underlying growth rate or the seasonally adjusted series. An important issue, which has attracted a great deal of attention also in the seasonal adjustment literature, is its detection by an expert procedure. The general-to-specific approach to the detection of structural change, currently implemented in Autometrics via indicator saturation, has proven to be both practical and effective in the context of stationary dynamic regression models and unit-root autoregressions. By focusing on impulse- and step-indicator saturation, we investigate via Monte Carlo simulations how this approach performs for detecting additive outliers and level shifts in the analysis of nonstationary seasonal time series. The reference model is the basic structural model, featuring a local linear trend, possibly integrated of order two, stochastic seasonality...

  18. Spectrum of Slip Processes on the Subduction Interface in a Continuum Framework Resolved by Rate-and State Dependent Friction and Adaptive Time Stepping

    Science.gov (United States)

    Herrendoerfer, R.; van Dinther, Y.; Gerya, T.

    2015-12-01

    To explore the relationships between subduction dynamics and megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake-cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant for estimating the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we innovatively implemented rate- and state-dependent friction (RSF) and adaptive time-stepping in our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby to first order validating our implementation. By incorporating adaptive time-stepping based on a
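
    For reference, the conventional rate- and state-dependent friction formulation (here with the Dieterich aging law) that the continuum implementation generalizes reads

      \mu(V,\theta) = \mu_{0} + a\,\ln\frac{V}{V_{0}} + b\,\ln\frac{V_{0}\,\theta}{D_{c}},
      \qquad
      \frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_{c}}

    where V is the slip rate, θ the state variable, D_c the characteristic slip distance, and a − b < 0 marks rate-weakening behaviour; in the abstract's invariant formulation, slip rates are recast as strain rates rather than being tied to a predefined planar fault.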

  19. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c and the dried-product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
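
    As a hedged sketch of Morris screening in practice (assuming the SALib package; the bounds and the stand-in response function below are invented for illustration and do not represent the validated mechanistic drying model):

      import numpy as np
      from SALib.sample.morris import sample
      from SALib.analyze.morris import analyze

      problem = {
          "num_vars": 3,
          "names": ["T_s", "P_c", "alpha_Rp"],
          "bounds": [[243.0, 283.0], [5.0, 30.0], [1e4, 1e5]],
      }

      X = sample(problem, N=50)                 # Morris trajectories

      def toy_sublimation_rate(row):
          T_s, P_c, alpha_Rp = row              # illustrative response only
          return (T_s - 230.0) * 0.8 / np.sqrt(alpha_Rp) - 0.001 * P_c

      Y = np.array([toy_sublimation_rate(row) for row in X])
      Si = analyze(problem, X, Y, print_to_console=True)  # mu*, sigma per input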

  20. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Keywords: Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models.

  1. Modeling of X-ray emissions produced by stepping lightning leaders

    OpenAIRE

    Xu , Wei; Celestin , Sebastien; Pasko , Victor P.

    2014-01-01

    Intense and brief bursts of X-ray emissions have been measured during the stepping process of both natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, we investigate theoretically the energy spectra of X-rays produced by the bremsstrahlung emission of thermal runaway electrons accelerated in the inhomogeneous electric field produced around lightning leader tips. The X-ray energy spectrum depends on the physical properties of the associated l...

  2. Mixed models for data from thorough QT studies: part 2. One-step assessment of conditional QT prolongation.

    Science.gov (United States)

    Schall, Robert

    2011-01-01

    We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation. Copyright © 2010 John Wiley & Sons, Ltd.
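
    Two of the structures discussed, random effects versus random coefficients, can be sketched in simplified form with Python's statsmodels; note that MixedLM assumes i.i.d. residuals, so the unstructured within-treatment covariance used in the paper's preferred models is not reproduced here. All column names and effect sizes are synthetic.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_sub, n_t = 40, 6
      df = pd.DataFrame({"subject": np.repeat(np.arange(n_sub), n_t),
                         "time": np.tile(np.arange(n_t), n_sub)})
      df["treatment"] = df["subject"] % 2          # 0 = placebo, 1 = drug
      u = rng.normal(0, 5, n_sub)                  # subject random intercepts
      df["qtcf"] = (400 + 3 * df["treatment"] + u[df["subject"]]
                    + rng.normal(0, 4, len(df)))

      # (a) random effects: subject-level random intercept
      m1 = smf.mixedlm("qtcf ~ treatment + time", df, groups=df["subject"]).fit()
      # (b) random coefficients: random intercept and slope in time per subject
      m2 = smf.mixedlm("qtcf ~ treatment + time", df, groups=df["subject"],
                       re_formula="~time").fit()
      print(m1.summary())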

  3. Designers workbench: toward real-time immersive modeling

    Science.gov (United States)

    Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu

    2000-05-01

    This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital gap' experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  4. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time-varying parameter (TVP) technique for modelling the arrival of Korean tourists in Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used to model Korean tourist arrivals in Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
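
    A minimal numerical sketch of the approach named above, a TVP regression estimated with a Kalman filter: the coefficients follow random walks and the observation equation is KOR_t = x_t' beta_t + noise. The data and noise variances below are synthetic stand-ins, not the study's series.

      import numpy as np

      rng = np.random.default_rng(0)
      T, k = 72, 3                                 # months; predictors (WON, INFKR, INFID)
      X = rng.normal(size=(T, k))
      true_beta = np.cumsum(rng.normal(0, 0.05, size=(T, k)), axis=0)
      y = np.einsum("tk,tk->t", X, true_beta) + rng.normal(0, 0.3, T)

      beta = np.zeros(k)                           # state estimate
      P = np.eye(k)                                # state covariance
      Q, R = 0.01 * np.eye(k), 0.3 ** 2            # state / observation noise

      for t in range(T):
          P = P + Q                                # predict: random-walk coefficients
          x = X[t]
          S = x @ P @ x + R                        # innovation variance
          K = P @ x / S                            # Kalman gain
          beta = beta + K * (y[t] - x @ beta)      # measurement update
          P = P - np.outer(K, x) @ P

      print("final coefficient estimates:", beta)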

  5. Gas flushing through hyper-acidic crater lakes: the next steps within a reframed monitoring time window

    Science.gov (United States)

    Rouwet, Dmitri

    2016-04-01

    evaporative degassing plumes can be useful as monitoring tool on the short-term, but only if the underlying process of gas flushing through acidic lakes is better understood, and linked with the lake water chemistry; (2) The second method forgets about chemical kinetics, degassing models and dynamics of phreatic eruptions, and sticks to the classical principle in geology of "the past is the key for the future". How did lake chemistry parameters vary during the various stages of unrest and eruption, on a purely mathematical basis? Can we recognise patterns in the numerical values related to the changes in volcanic activity? Water chemistry only as a monitoring tool for extremely dynamic and erupting crater lake systems, is inefficient in revealing short-term precursors for single phreatic eruptions, within the current perspective of the residence time dependent monitoring time window. The monitoring rules established since decades based only on water chemistry have thus somehow become obsolete and need revision.

  6. The manifold model for space-time

    International Nuclear Information System (INIS)

    Heller, M.

    1981-01-01

    Physical processes happen in a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part, the definitions of the C∞ manifold and of certain structures which arise in a natural way from the manifold concept are given. The role of the enveloping Euclidean space (i.e. the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties in the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)

  7. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998-1999 RTMOD project is a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  8. Characterization of olive oil volatiles by multi-step direct thermal desorption-comprehensive gas chromatography-time-of-flight mass spectrometry using a programmed temperature vaporizing injector

    NARCIS (Netherlands)

    de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.

    2008-01-01

    The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application the system is used for the characterization of fresh versus aged olive

  9. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant annual increase in student enrollment, due to the large youth population accompanied by the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated.

  10. Emergency Department Telepsychiatry Service Model for a Rural Regional Health System: The First Steps.

    Science.gov (United States)

    Meyer, James D; McKean, Alastair J S; Blegen, Rebecca N; Demaerschalk, Bart M

    2018-05-09

    Emergency departments (EDs) have recognized an increasing number of patients presenting with mental health (MH) concerns. This trend imposes greater demands upon EDs already operating at capacity. Many ED providers do not feel they are optimally prepared to provide the necessary MH care. One consideration in response to this dilemma is to use advanced telemedicine technology for psychiatric consultation. We examined a rural- and community-based health system operating 21 EDs, none of which has direct access to psychiatric consultation. Dedicated beds to MH range from zero (in EDs with only 3 beds) to 6 (in an ED with 38 beds). We conducted a needs assessment of this health system. This included a survey of emergency room providers with a 67% response rate and site visits to directly observe patient flow and communication with ED staff. A visioning workshop provided input from ED staff. Data were also obtained, which reflected ED admissions for the year 2015. The data provide a summary of provider concerns, a summary of MH presentations and diagnosis, and age groupings. The data also provide a time when most MH concerns present to the ED. Based upon these results, a proposed model for delivering comprehensive regional emergency telepsychiatry and behavioral health services is proposed. Emergency telepsychiatry services may be a tenable solution for addressing the shortage of psychiatric consultation to EDs in light of increasing demand for MH treatment in the ED.

  11. Sensitivity of microwave ablation models to tissue biophysical properties: A first step toward probabilistic modeling and treatment planning.

    Science.gov (United States)

    Sebek, Jan; Albin, Nathan; Bortel, Radoslav; Natarajan, Bala; Prakash, Punit

    2016-05-01

    Computational models of microwave ablation (MWA) are widely used during the design optimization of novel devices and are under consideration for patient-specific treatment planning. The objective of this study was to assess the sensitivity of computational models of MWA to tissue biophysical properties. The Morris method was employed to assess the global sensitivity of the coupled electromagnetic-thermal model, which was implemented with the finite element method (FEM). The FEM model incorporated temperature dependencies of tissue physical properties. The variability of the model was studied using six different outputs to characterize the size and shape of the ablation zone, as well as impedance matching of the ablation antenna. Furthermore, the sensitivity results were statistically analyzed and absolute influence of each input parameter was quantified. A framework for systematically incorporating model uncertainties for treatment planning was suggested. A total of 1221 simulations, incorporating 111 randomly sampled starting points, were performed. Tissue dielectric parameters, specifically relative permittivity, effective conductivity, and the threshold temperature at which they transitioned to lower values (i.e., signifying desiccation), were identified as the most influential parameters for the shape of the ablation zone and antenna impedance matching. Of the thermal parameters considered in this study, the nominal blood perfusion rate and the temperature interval across which the tissue changes phase were identified as the most influential. The latent heat of tissue water vaporization and the volumetric heat capacity of the vaporized tissue were recognized as the least influential parameters. Based on the evaluation of absolute changes, the most important parameter (perfusion) had approximately 40.23 times greater influence on ablation area than the least important parameter (volumetric heat capacity of vaporized tissue). Another significant input parameter

  12. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration... Modellers accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results...

  13. Time Modeling: Salvatore Sciarrino, Windows and Beclouding

    Directory of Open Access Journals (Sweden)

    Acácio Tadeu de Camargo Piedade

    2017-08-01

    In this article I intend to discuss one of the figures created by the Italian composer Salvatore Sciarrino: the windowed form. After the composer's explanation of this figure, I argue that windows in composition can open the musical discourse both inwards and outwards. On one side, they point to the composition's inner ambiences and constitute an internal remission. On the other, they instigate the audience to comprehend the external reference, thereby constructing intertextuality. After the outward window form, I will consider some techniques of distortion, particularly one that I call beclouding. To conclude, I will comment on the question of memory and of composition as time modeling.

  14. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    OpenAIRE

    Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M

    2011-01-01

    Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...

  15. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    Science.gov (United States)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional-step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct-flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate at roughly half the computational cost. The code is then tested on the simulation of a shear-flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
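
    The flavour of IMEX time marching can be sketched on a 1D periodic advection-diffusion problem: the stiff diffusion term is treated implicitly (Crank-Nicolson) while advection is advanced explicitly (second-order Adams-Bashforth). This illustrates the splitting idea only; Diablo 2.0's low-storage IMEX Runge-Kutta schemes and its fractional-step pressure treatment are more elaborate.

      import numpy as np

      N, nu, c, dt = 128, 0.01, 1.0, 1e-3
      x = np.linspace(0, 2 * np.pi, N, endpoint=False)
      u = np.sin(x)
      k = np.fft.rfftfreq(N, d=1.0 / N)             # integer wavenumbers
      L = -nu * k**2                                # diffusion operator (implicit)

      def advection(u):                             # advective term (explicit)
          return -c * np.fft.irfft(1j * k * np.fft.rfft(u), N)

      Nm1 = advection(u)                            # bootstrap AB2 (first step ~ Euler)
      for step in range(1000):
          Nn = advection(u)
          uh = np.fft.rfft(u)
          rhs = uh * (1 + 0.5 * dt * L) + dt * np.fft.rfft(1.5 * Nn - 0.5 * Nm1)
          uh = rhs / (1 - 0.5 * dt * L)             # Crank-Nicolson solve (diagonal here)
          u = np.fft.irfft(uh, N)
          Nm1 = Nn

      print("max|u| after integration:", np.abs(u).max())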

  16. Global model of the upper atmosphere with a variable step of integration in latitude

    International Nuclear Information System (INIS)

    Namgaladze, A.A.; Martynenko, O.V.; Namgaladze, A.N.

    1996-01-01

    A new version of the model of the Earth's thermosphere, ionosphere and protonosphere with increased spatial resolution, implemented on a personal computer, is developed. A numerical algorithm for solving the model equations is presented which makes it possible to apply a variable (latitude-dependent) integration step in latitude, and thereby to increase the model's latitude resolution in the latitude zones of interest. A comparison of model calculations of ionosphere and thermosphere parameters, performed with different integration steps in geomagnetic latitude, is presented. 10 refs.; 3 figs

  17. Qualitative validation of humanoid robot models through balance recovery side-stepping experiments

    NARCIS (Netherlands)

    Assman, T.M.; Zutven, van P.W.M.; Nijmeijer, H.

    2013-01-01

    Different models are used in the literature to approximate the complex dynamics of a humanoid robot. Many models rely on strongly varying assumptions that neglect the influence of the feet, discontinuous ground impact, internal dynamics, and coupling between the 3D coronal and sagittal plane dynamics.

  18. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation basically amounts to an optimization of the S.T.I.s of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation for computing system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.) and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author)

  19. The electrical resistivity of rough thin films: A model based on electron reflection at discrete step edges

    Science.gov (United States)

    Zhou, Tianji; Zheng, Pengyuan; Pandey, Sumeet C.; Sundararaman, Ravishankar; Gall, Daniel

    2018-04-01

    The effect of the surface roughness on the electrical resistivity of metallic thin films is described by electron reflection at discrete step edges. A Landauer formalism for incoherent scattering leads to a parameter-free expression for the resistivity contribution from surface mound-valley undulations that is additive to the resistivity associated with bulk and surface scattering. In the classical limit where the electron reflection probability matches the ratio of the step height h divided by the film thickness d, the additional resistivity Δρ = √(3/2)/(g0d) × ω/ξ, where g0 is the specific ballistic conductance and ω/ξ is the ratio of the root-mean-square surface roughness divided by the lateral correlation length of the surface morphology. First-principles non-equilibrium Green's function density functional theory transport simulations on 1-nm-thick Cu(001) layers validate the model, confirming that the electron reflection probability is equal to h/d and that the incoherent formalism matches the coherent scattering simulations for surface step separations ≥2 nm. Experimental confirmation is done using 4.5-52 nm thick epitaxial W(001) layers, where ω = 0.25-1.07 nm and ξ = 10.5-21.9 nm are varied by in situ annealing. Electron transport measurements at 77 and 295 K indicate a linear relationship between Δρ and ω/(ξd), confirming the model predictions. The model suggests a stronger resistivity size effect than predictions of existing models by Fuchs [Math. Proc. Cambridge Philos. Soc. 34, 100 (1938)], Sondheimer [Adv. Phys. 1, 1 (1952)], Rossnagel and Kuan [J. Vac. Sci. Technol., B 22, 240 (2004)], or Namba [Jpn. J. Appl. Phys., Part 1 9, 1326 (1970)]. It provides a quantitative explanation for the empirical parameters in these models and may explain the recently reported deviations of experimental resistivity values from these models.
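
    For illustration, the quoted expression can be evaluated directly; the following Python snippet computes Δρ for assumed film parameters (g0 is set to an order-of-magnitude value, not a number taken from the paper):

        import numpy as np

        # Direct evaluation of Δρ = √(3/2)/(g0·d) × ω/ξ for assumed film
        # parameters; g0 is an order-of-magnitude placeholder.
        g0 = 1.0e15      # specific ballistic conductance, S/m^2 (assumed)
        d = 20e-9        # film thickness, m
        omega = 0.5e-9   # rms surface roughness, m
        xi = 15e-9       # lateral correlation length, m

        delta_rho = np.sqrt(1.5) * omega / (g0 * d * xi)
        print(f"additional resistivity: {delta_rho * 1e8:.3f} micro-ohm cm")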

  20. A novel enterovirus and parechovirus multiplex one-step real-time PCR-validation and clinical experience

    DEFF Research Database (Denmark)

    Nielsen, A. C. Y.; Bottiger, B.; Midgley, S. E.

    2013-01-01

    As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively.

  1. Adaptive back-stepping control of the harmonic drive system with LuGre model-based friction compensation

    Science.gov (United States)

    Liu, Sen; Gang, Tieqiang

    2018-03-01

    Harmonic drives are widely used in aerospace and industrial robots. Flexibility, friction and parameter uncertainty will result in transmission performance degradation. In this paper, an adaptive back-stepping method with friction compensation is proposed to improve the tracking performance of the harmonic drive system. The nonlinear friction is described by LuGre model and compensated with a friction observer, and the uncertainty of model parameters is resolved by adaptive parameter estimation method. By using Lyapunov stability theory, it is proved that all the errors of the closed-loop system are uniformly ultimately bounded. Simulations illustrate the effectiveness of our friction compensation method.
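
    A minimal Python sketch of the LuGre friction dynamics referred to above may help fix ideas; all parameter values are invented, and the paper's adaptive back-stepping controller and friction observer are not reproduced:

        import numpy as np

        # Forward-Euler integration of the LuGre friction model (the dynamics
        # compensated by the paper's observer); parameter values are assumed.
        sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4   # bristle stiffness/damping, viscous term
        Fc, Fs, vs = 1.0, 1.5, 0.01                # Coulomb level, stiction level, Stribeck velocity

        def g(v):
            # Stribeck curve: steady-state friction level divided by sigma0
            return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) / sigma0

        def lugre_step(z, v, dt):
            zdot = v - abs(v) * z / g(v)           # internal bristle state dynamics
            z_new = z + dt * zdot
            F = sigma0 * z_new + sigma1 * zdot + sigma2 * v   # friction force
            return z_new, F

        z, dt = 0.0, 1e-5
        for k in range(20000):
            v = 0.02 * np.sin(2.0 * np.pi * k * dt)   # prescribed sliding velocity
            z, F = lugre_step(z, v, dt)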

  2. Simulation study of multi-step model algorithmic control of the nuclear reactor thermal power tracking system

    International Nuclear Information System (INIS)

    Shi Xiaoping; Xu Tianshu

    2001-01-01

    The classical control method is usually unable to ensure thermal power tracking accuracy, because the nuclear reactor system is a complex nonlinear system with uncertain parameters and disturbances. A nonparametric model is constructed from the open-loop impulse response of the system. A thermal power tracking digital control law is then derived using the multi-step model algorithmic control principle. The presented control method has good tracking performance and robustness, and works despite the existence of unmeasurable disturbances. The simulation experiment confirms the correctness and effectiveness of the method: high-accuracy matching between the thermal power and the reference load is achieved.
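
    The prediction step of model algorithmic control can be sketched from a finite impulse response alone; the Python fragment below assumes a made-up impulse response and horizons, takes future inputs beyond the control horizon as zero, and solves for the control moves by least squares. It illustrates the structure of such a controller, not the paper's exact control law:

        import numpy as np

        # Sketch of model algorithmic control (MAC) built on a nonparametric
        # impulse-response model; plant, horizons and response are invented.
        h = 0.3 * 0.7 ** np.arange(30)     # identified open-loop impulse response (assumed)
        P, M = 10, 3                       # prediction and control horizons

        def free_response(u_past):
            # Contribution of inputs already applied to the next P outputs
            y = np.zeros(P)
            for j in range(P):
                for i in range(len(h)):
                    k = j - i              # negative k -> input applied in the past
                    if k < 0 and -k <= len(u_past):
                        y[j] += h[i] * u_past[k]
            return y

        # Dynamic matrix mapping M future inputs to P predicted outputs
        # (inputs beyond the control horizon are taken as zero in this sketch)
        G = np.zeros((P, M))
        for j in range(P):
            for i in range(min(j + 1, M)):
                G[j, i] = h[j - i]

        u_past, r = [], np.ones(P)         # reference: unit thermal power step
        for t in range(50):
            u_future = np.linalg.lstsq(G, r - free_response(u_past), rcond=None)[0]
            u_past.append(u_future[0])     # receding horizon: apply only the first input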

  3. A One-Step-Ahead Smoothing-Based Joint Ensemble Kalman Filter for State-Parameter Estimation of Hydrological Models

    KAUST Repository

    El Gharamti, Mohamad

    2015-11-26

    The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following a joint state-parameter augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which a one-step-ahead smoothing of the state is performed before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in the computational cost.
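
    For reference, a minimal stochastic EnKF analysis step on a joint state-parameter vector looks as follows (toy dimensions and synthetic observations; the paper's one-step-ahead smoothing of the state before the parameter update is deliberately omitted from this sketch):

        import numpy as np

        # Minimal stochastic EnKF analysis step on a joint state-parameter
        # vector; sizes, observation operator and data are invented.
        rng = np.random.default_rng(0)
        Ne, nx, ny = 50, 20, 5                  # ensemble size, state dim, obs dim
        X = rng.normal(size=(nx, Ne))           # forecast ensemble (states and parameters stacked)
        H = np.zeros((ny, nx))
        H[np.arange(ny), np.arange(ny)] = 1.0   # observe the first ny state entries
        R = 0.1 * np.eye(ny)                    # observation error covariance
        y = rng.normal(size=ny)                 # synthetic observations

        A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
        S = H @ A                               # anomalies mapped to observation space
        K = A @ S.T @ np.linalg.inv(S @ S.T + (Ne - 1) * R)   # ensemble Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(ny), R, Ne).T  # perturbed obs
        Xa = X + K @ (Y - H @ X)                # analysis ensemble (states and parameters)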

  4. STeP: A Tool for the Development of Provably Correct Reactive and Real-Time Systems

    National Research Council Canada - National Science Library

    Manna, Zohar

    1999-01-01

    This research is directed towards the implementation of a comprehensive toolkit for the development and verification of high assurance reactive systems, especially concurrent, real time, and hybrid systems...

  5. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options because of its abundance and accessibility. Due to the intermittent nature of solar radiation, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating conditions. First, a practical PV system model is studied, explicitly determining the series and shunt resistances that are neglected in some research. In the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, exploiting input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
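
    The core perturb-and-observe loop with an adaptive duty-ratio step can be sketched in a few lines of Python; the PV curve, the duty-to-voltage mapping and all gains below are invented stand-ins, not the thesis' Simulink model:

        import numpy as np

        # Sketch of an adaptive-step perturb-and-observe MPPT loop acting on
        # the duty ratio of a boost converter; every value is a placeholder.
        def pv_power(v):
            # Toy P-V curve with a single maximum (open-circuit voltage 36 V)
            i = 8.0 - 8.0 * (np.exp(v / 6.0) - 1.0) / (np.exp(6.0) - 1.0)
            return max(v * i, 0.0)

        duty, step_min, step_max, gain = 0.5, 0.001, 0.05, 0.02
        v_prev, p_prev = 30.0, pv_power(30.0)
        for k in range(200):
            v = 36.0 * (1.0 - duty)             # crude duty -> operating voltage mapping
            p = pv_power(v)
            dp, dv = p - p_prev, v - v_prev
            # Adaptive step: steep |dP/dV| (far from the peak) -> larger perturbation
            slope = abs(dp / dv) if dv != 0.0 else 0.0
            step = float(np.clip(gain * slope, step_min, step_max))
            duty += step if dp * dv < 0.0 else -step
            duty = float(np.clip(duty, 0.05, 0.95))
            v_prev, p_prev = v, p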

  6. Modeling of the steam hydrolysis in a two-step process for hydrogen production by solar concentrated energy

    Science.gov (United States)

    Valle-Hernández, Julio; Romero-Paredes, Hernando; Pacheco-Reyes, Alejandro

    2017-06-01

    In this paper, the simulation of the steam hydrolysis step for hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 to a lower-valence cerium oxide at high temperature, where concentrated solar energy is used as the source of heat, and of the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. The modeling of the endothermic reduction step was presented at SolarPACES 2015. This work presents the modeling of the exothermic step: the hydrolysis of cerium(III) oxide to form H2 and regenerate the initial cerium oxide, carried out at lower temperature inside the solar reactor. For this model, three sections of the pipe where the reaction occurs were considered: the steam inlet, the porous medium and the outlet of the hydrogen produced. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and allow the fluid dynamics and the heat transfer within the pipe to be obtained. This work is part of the project "Solar Fuels and Industrial Processes" from the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).

  7. Statistical modeling of tear strength for one step fixation process of reactive printing and easy care finishing

    International Nuclear Information System (INIS)

    Asim, F.; Mahmood, M.

    2017-01-01

    Statistical modeling plays a significant role in predicting the impact of potential factors affecting the one-step fixation process of reactive printing and easy care finishing. The investigation of factors significant for the tear strength of cotton fabric in single-step fixation of reactive printing and easy care finishing has been carried out in this research work using experimental design techniques. The potential design factors were: concentration of reactive dye, concentration of crease resistant, fixation method and fixation temperature. The experiments were designed using DoE (Design of Experiments) and analyzed with the software Design Expert. A detailed analysis of the significant factors and interactions, including ANOVA (Analysis of Variance), residuals, model accuracy and the statistical model for tear strength, is presented. The interaction and contour plots of the vital factors have been examined. The statistical analysis shows that each factor interacts with at least one other factor, and most of the investigated factors show curvature effects with respect to the others. After critical examination of the significant plots, a quadratic model of tear strength with significant terms and their interactions at alpha = 0.05 has been developed. The calculated correlation coefficient R2 of the developed model is 0.9056. This high correlation coefficient indicates that the developed equation will predict tear strength precisely over the range of values studied. (author)
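
    A quadratic response-surface model of this kind can be fitted by ordinary least squares; the Python sketch below uses invented factor data and a reduced set of interaction terms purely to illustrate the form of the model and the R2 computation:

        import numpy as np

        # Least-squares fit of a quadratic response-surface model with an
        # interaction term; factor data and coefficients are invented.
        rng = np.random.default_rng(1)
        n = 30
        dye, crease, temp = rng.uniform(-1.0, 1.0, (3, n))   # coded factor levels
        tear = 40 - 3*dye - 5*crease + 2*dye*crease - 4*crease**2 + rng.normal(0.0, 1.0, n)

        # Design matrix: intercept, main effects, one interaction, squared terms
        X = np.column_stack([np.ones(n), dye, crease, temp,
                             dye*crease, dye**2, crease**2, temp**2])
        beta, *_ = np.linalg.lstsq(X, tear, rcond=None)
        pred = X @ beta
        r2 = 1.0 - np.sum((tear - pred)**2) / np.sum((tear - tear.mean())**2)
        print("R2 =", round(float(r2), 4))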

  8. The First Step in Prison Training Program Evaluation: A Model for Pinpointing Critical Needs

    Science.gov (United States)

    Vicino, Frank L.; And Others

    1977-01-01

    The model allows for the determination of the severity of the training problems, thus leading naturally to problem priority and in addition a basis for evaluation design. Results from the use of the model, in an operational setting, are presented. (Author)

  9. Prenatal cleft lip and maxillary alveolar defect repair in a 2-step fetal lamb model.

    NARCIS (Netherlands)

    Wenghoefer, M.H.; Deprest, J.; Goetz, W.; Kuijpers-Jagtman, A.M.; Bergé, S.J.

    2007-01-01

    PURPOSE: As there is no satisfactory animal model that simulates the complex cleft lip and palate anatomy in a standardized defect while also allowing extensive surgical procedures, an improved fetal lamb model for cleft surgery was developed.

  10. Continuous Time Dynamic Contraflow Models and Algorithms

    Directory of Open Access Journals (Sweden)

    Urmila Pyakurel

    2016-01-01

    Research on the evacuation planning problem is driven by the challenging emergencies caused by large-scale natural or man-made disasters. Evacuation planning is the process of shifting the maximum number of evacuees from the disaster areas to safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for obtaining good solutions of the evacuation planning problem: it increases the outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum flow, as a flow rate, from the source to the sink at every moment of time. We propose a mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single-source single-sink arbitrary networks, and the continuous earliest arrival contraflow problem on single-source single-sink series-parallel networks with undefined supply and demand. We also introduce an approximate solution for the continuous earliest arrival contraflow problem on two-terminal arbitrary networks.
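
    The static version of the contraflow idea is easy to state in code: reversing lanes lets each road carry its total two-way capacity in one direction. A small Python sketch with an invented network, using networkx for the max-flow computation (the continuous dynamic problem studied in the paper is not reproduced here):

        import networkx as nx

        # Static illustration of contraflow on an invented network.
        arcs = [("s", "a", 4), ("a", "t", 3), ("s", "b", 2), ("b", "t", 5), ("b", "a", 2)]

        def max_flow(edges):
            G = nx.DiGraph()
            for u, v, c in edges:
                G.add_edge(u, v, capacity=c)
            return nx.maximum_flow_value(G, "s", "t")

        # Auxiliary network: each road {u, v} gets its summed capacity in both
        # directions; opposite flows cancel, so this models lane reversal.
        total = {}
        for u, v, c in arcs:
            key = frozenset((u, v))
            total[key] = total.get(key, 0) + c
        contra = []
        for key, cap in total.items():
            u, v = tuple(key)
            contra += [(u, v, cap), (v, u, cap)]

        print("normal:", max_flow(arcs), " contraflow:", max_flow(contra))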

  11. Development of good modelling practice for phsiologically based pharmacokinetic models for use in risk assessment: The first steps

    Science.gov (United States)

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the development of internationally recognized good modelling practices. These practices would facilitate sharing of models and model eva...

  12. Grey-box Modeling for System Identification of Household Refrigerators: a Step Toward Smart Appliances

    DEFF Research Database (Denmark)

    Costanzo, Giuseppe Tommaso; Sossan, Fabrizio; Marinelli, Mattia

    2013-01-01

    This paper presents the grey-box modeling of a vapor-compression refrigeration system for residential applications, based on maximum likelihood estimation of parameters in stochastic differential equations. The models obtained are useful in the view of controlling refrigerators as flexible consumption units, whose operation can be shifted within temperature and operational constraints. Even if the refrigerators are not intended to be used as smart loads, validated models are useful for predicting unit consumption. This information can increase the optimality of the management of other flexible units.

  13. A novel enterovirus and parechovirus multiplex one-step real-time PCR-validation and clinical experience.

    Science.gov (United States)

    Nielsen, Alex Christian Yde; Böttiger, Blenda; Midgley, Sofie Elisabeth; Nielsen, Lars Peter

    2013-11-01

    As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients, and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively. 180 of the positive samples could be genotyped by PCR and sequencing, and the most common genotypes found were human parechovirus type 3, echovirus 9, enterovirus 71, Coxsackievirus A16, and echovirus 25. During 2009 in Denmark, both enterovirus and human parechovirus type 3 had a similar seasonal pattern with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses, which can cause similar clinical symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
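
    The baseline that such derivations improve on is naive iterative forecasting, where the predictive mean is fed back as if it were a noise-free input. A minimal Gaussian process sketch of that baseline in Python (invented series, squared-exponential kernel, uncertainty propagation deliberately omitted):

        import numpy as np

        # Naive iterative multi-step forecasting with a Gaussian process:
        # the predictive mean is fed back, ignoring its uncertainty (the
        # effect the paper's analytic expressions account for).
        rng = np.random.default_rng(2)
        series = np.sin(0.3 * np.arange(120)) + 0.05 * rng.normal(size=120)
        L = 4                                        # autoregressive lag order (assumed)
        X = np.array([series[i:i + L] for i in range(len(series) - L)])
        y = series[L:]

        def kern(A, B, ell=1.0, s2=1.0):
            # Squared-exponential (Gaussian) kernel
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return s2 * np.exp(-0.5 * d2 / ell**2)

        Kinv = np.linalg.inv(kern(X, X) + 1e-4 * np.eye(len(X)))
        state = list(series[-L:])
        forecast = []
        for step in range(20):
            xs = np.array(state[-L:])[None, :]
            mean = (kern(xs, X) @ Kinv @ y)[0]       # GP predictive mean
            forecast.append(mean)
            state.append(mean)                       # feed the mean back in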

  15. Integration into Big Data: First Steps to Support Reuse of Comprehensive Toxicity Model Modules (SOT)

    Science.gov (United States)

    Data surrounding the needs of human disease and toxicity modeling are largely siloed, limiting the ability to extend and reuse modules across knowledge domains. Using an infrastructure that supports integration across knowledge domains (animal toxicology, high-throughput screening...

  16. Road traffic impact on urban water quality: a step towards integrated traffic, air and stormwater modelling.

    Science.gov (United States)

    Fallah Shorshani, Masoud; Bonhomme, Céline; Petrucci, Guido; André, Michel; Seigneur, Christian

    2014-04-01

    Methods for simulating air pollution due to road traffic and the associated effects on stormwater runoff quality in an urban environment are examined with particular emphasis on the integration of the various simulation models into a consistent modelling chain. To that end, the models for traffic, pollutant emissions, atmospheric dispersion and deposition, and stormwater contamination are reviewed. The present study focuses on the implementation of a modelling chain for an actual urban case study, which is the contamination of water runoff by cadmium (Cd), lead (Pb), and zinc (Zn) in the Grigny urban catchment near Paris, France. First, traffic emissions are calculated with traffic inputs using the COPERT4 methodology. Next, the atmospheric dispersion of pollutants is simulated with the Polyphemus line source model and pollutant deposition fluxes in different subcatchment areas are calculated. Finally, the SWMM water quantity and quality model is used to estimate the concentrations of pollutants in stormwater runoff. The simulation results are compared to mass flow rates and concentrations of Cd, Pb and Zn measured at the catchment outlet. The contribution of local traffic to stormwater contamination is estimated to be significant for Pb and, to a lesser extent, for Zn and Cd; however, Pb is most likely overestimated due to outdated emissions factors. The results demonstrate the importance of treating distributed traffic emissions from major roadways explicitly since the impact of these sources on concentrations in the catchment outlet is underestimated when those traffic emissions are spatially averaged over the catchment area.

  17. Assimilation of LAI time-series in crop production models

    Science.gov (United States)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has been shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables in crop production models would improve agricultural decision support both at the farm and at the field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effect of the assimilation method and the amount of assimilated observations was evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used to calibrate the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field-measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth-reducing factors, which are useful for farm decision support. The combination of crop models and sensor

  18. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    Science.gov (United States)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities, and since it has an impact on tool wear, its estimation is important. This study deals with a new modelling strategy, based on two steps of calculation, for the analysis of heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first one is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first-step calculation, such as contact pressure, sliding velocity distributions and contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.

  19. Effect of different air-drying time on the microleakage of single-step self-etch adhesives

    OpenAIRE

    Moosavi, Horieh; Forghani, Maryam; Managhebi, Esmatsadat

    2013-01-01

    Objectives This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), Bond Force (BF). Each main group was divided into three subgroups regarding the air-drying time: without application of air stream...

  20. Maximizing time from the constraining European Working Time Directive (EWTD): The Heidelberg New Working Time Model.

    Science.gov (United States)

    Schimmack, Simon; Hinz, Ulf; Wagner, Andreas; Schmidt, Thomas; Strothmann, Hendrik; Büchler, Markus W; Schmitz-Winnenthal, Hubertus

    2014-01-01

    The introduction of the European Working Time Directive (EWTD) has greatly reduced the training hours of surgical residents, which translates into 30% less surgical and clinical experience. Such a dramatic drop in attendance has serious implications, such as a compromised quality of medical care. As the surgical department of the University of Heidelberg, our goal was to establish a model that was compliant with the EWTD while avoiding a reduction in the quality of patient care and surgical training. We first performed workload analyses and performance statistics for all working areas of our department (operating theater, emergency room, specialized consultations, surgical wards and on-call duties) using personal interviews, time cards, medical documentation software as well as data from the financial and personnel controlling sectors of our administration. Using that information, we designed an EWTD-compatible work model and implemented it. Surgical wards and operating rooms (ORs) were not compliant with the EWTD. Between 5 pm and 8 pm, three ORs were still operating two-thirds of the time. By creating an extended work shift (7:30 am-7:30 pm), we effectively reduced the workload to less than 49% between 4 pm and 8 am, allowing the combination of an eight-hour working day with a 16-hour on-call duty; thus maximizing surgical resident training and ensuring continuity of patient care while maintaining EWTD guidelines. A precise workload analysis is the key to success. The Heidelberg New Working Time Model provides a legal model which, by avoiding rotating work shifts, assures quality of patient care and surgical training.

  1. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  2. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  3. Time will tell: community acceptability of HIV vaccine research before and after the “Step Study” vaccine discontinuation

    Directory of Open Access Journals (Sweden)

    Paula M Frew

    2010-09-01

    Objective: This study examines whether men who have sex with men (MSM) and transgender (TG) persons' attitudes, beliefs, and risk perceptions toward human immunodeficiency virus (HIV) vaccine research have been altered as a result of the negative findings from a phase 2B HIV vaccine study. Design: We conducted a cross-sectional survey among MSM and TG persons (N = 176) recruited from community settings in Atlanta from 2007 to 2008. The first group was recruited during an active phase 2B HIV vaccine trial in which a candidate vaccine was being evaluated (the "Step Study"), and the second group was recruited after product futility was widely reported in the media. Methods: Descriptive statistics, t tests, and chi-square tests were conducted to ascertain differences between the groups, and ordinal logistic regressions examined the influences of the above-mentioned factors on a critical outcome, future HIV vaccine study participation. The ordinal regression outcomes evaluated the influences on disinclination, neutrality, and inclination to study participation. Results: Behavioral outcomes such as future recruitment, event attendance, study promotion, and community mobilization did not reveal any differences in participants' intentions between the groups. However, we observed

  4. Modeling Coastal Vulnerability through Space and Time.

    Science.gov (United States)

    Hopper, Thomas; Meixler, Marcia S

    2016-01-01

    Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across multiple time

  5. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    Science.gov (United States)

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  6. The next step in coastal numerical models: spectral/hp element methods?

    DEFF Research Database (Denmark)

    Eskilsson, Claes; Engsig-Karup, Allan Peter; Sherwin, Spencer J.

    2005-01-01

    In this paper we outline the application of spectral/hp element methods for modelling nonlinear and dispersive waves. We present one- and two-dimensional test cases for the shallow water equations and Boussinesq-type equations, including highly dispersive Boussinesq-type equations.

  7. Accessing key steps of human tumor progression in vivo by using an avian embryo model

    Science.gov (United States)

    Hagedorn, Martin; Javerzat, Sophie; Gilges, Delphine; Meyre, Aurélie; de Lafarge, Benjamin; Eichmann, Anne; Bikfalvi, Andreas

    2005-02-01

    Experimental in vivo tumor models are essential for comprehending the dynamic process of human cancer progression, identifying therapeutic targets, and evaluating antitumor drugs. However, current rodent models are limited by high costs, long experimental duration, variability, restricted accessibility to the tumor, and major ethical concerns. To avoid these shortcomings, we investigated whether tumor growth on the chick chorio-allantoic membrane after human glioblastoma cell grafting would replicate characteristics of the human disease. Avascular tumors consistently formed within 2 days, then progressed through vascular endothelial growth factor receptor 2-dependent angiogenesis, associated with hemorrhage, necrosis, and peritumoral edema. Blocking of vascular endothelial growth factor receptor 2 and platelet-derived growth factor receptor signaling pathways by using small-molecule receptor tyrosine kinase inhibitors abrogated tumor development. Gene regulation during the angiogenic switch was analyzed by oligonucleotide microarrays. Defined sample selection for gene profiling permitted identification of regulated genes whose functions are associated mainly with tumor vascularization and growth. Furthermore, expression of known tumor progression genes identified in the screen (IL-6 and cysteine-rich angiogenic inducer 61) as well as potential regulators (lumican and F-box-only 6) follow similar patterns in patient glioma. The model reliably simulates key features of human glioma growth in a few days and thus could considerably increase the speed and efficacy of research on human tumor progression and preclinical drug screening. angiogenesis | animal model alternatives | glioblastoma

  8. First steps towards a state classification in the random-field Ising model

    International Nuclear Information System (INIS)

    Basso, Vittorio; Magni, Alessandro; Bertotti, Giorgio

    2006-01-01

    The properties of locally stable states of the random-field Ising model are studied. A map is defined for the dynamics driven by the field starting from a locally stable state. The fixed points of the map are connected with the limit hysteresis loops that appear in the classification of the states

  9. Modeling heat and mass transfer in the heat treatment step of yerba maté processing

    Directory of Open Access Journals (Sweden)

    J. M. Peralta

    2007-03-01

    The aim of this research was to estimate the leaf and twig temperature and moisture content of yerba maté branches (Ilex paraguariensis Saint Hilaire) during the heat treatment carried out in a rotary kiln dryer. These variables had to be estimated by modeling the heat and mass transfer, owing to the difficulty of experimental measurement in the dryer. For modeling, the equipment was divided into two zones: the flame or heat treatment zone and the drying zone. The model fit the experimental data well when water loss took place only in the leaves. In the first zone, the leaf temperature increased until it reached 135°C and then slowly decreased to 88°C at the exit, while the gas temperature varied in this zone from 460°C to 120°C. The twig temperature increased in the two zones from its inlet value (25°C) up to 75°C. A model error of about 3% was estimated based on theoretical and experimental data on leaf moisture content.

  10. A Theory of Interest Rate Stepping : Inflation Targeting in a Dynamic Menu Cost Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Schaling, E.; Verhagen, W.H.

    1999-01-01

    A stylised fact of monetary policy making is that central banks do not immediately respond to new information but rather seem to prefer to wait until sufficient ‘evidence’ to warrant a change has accumulated. However, theoretical models of inflation targeting imply that an optimising

  11. A first-principles model for the freezing step in ice cream manufacture

    NARCIS (Netherlands)

    Dorneanu, B.; Bildea, C.S.; Girievink, J.; Bongers, P.M.M.; Jezowski, J.; Thullie, J.

    2009-01-01

    This contribution deals with the development of a first-principles model of ice cream formation in the freezing unit, to support product design and plant operation. Conservation equations for mass, energy and momentum, under axial flow assumptions, are taken into account. The distributed

  12. A Step beyond Univision Evaluation: Using a Systems Model of Performance Improvement.

    Science.gov (United States)

    Sleezer, Catherine M.; Zhang, Jiping; Gradous, Deane B.; Maile, Craig

    1999-01-01

    Examines three views of performance improvement--scientific management, instructional design, and systems thinking--each providing a unique view of performance improvement and specific roles for evaluation. Provides an integrated definition of performance and a synthesis model that encompasses the three views. (AEF)

  13. Success rate evaluation of clinical governance implementation in teaching hospitals in Kerman (Iran) based on nine steps of Karsh's model.

    Science.gov (United States)

    Vali, Leila; Mastaneh, Zahra; Mouseli, Ali; Kardanmoghadam, Vida; Kamali, Sodabeh

    2017-07-01

    One of the ways to improve the quality of services in the health system is through clinical governance. This method aims to create a framework in which clinical service providers are accountable, in return for continuous quality improvement and the maintenance of service standards. The aim was to evaluate the success rate of clinical governance implementation in Kerman teaching hospitals based on the 9 steps of Karsh's Model. This cross-sectional study was conducted in 2015 on 94 people, including chief executive officers (CEOs), nursing managers, clinical governance managers and experts, head nurses and nurses. The required data were collected through a researcher-made questionnaire containing 38 questions with a three-point Likert scale (good, moderate, and weak). Karsh's Model consists of nine steps: top management commitment to change, accountability for change, creating a structured approach for change, training, pilot implementation, communication, feedback, simulation, and end-user participation. Data analysis using descriptive statistics and the Mann-Whitney-Wilcoxon test was done with SPSS software version 16. About 81.9% of respondents were female and 74.5% had a Bachelor of Nursing (BN) degree. In general, the status of clinical governance implementation in the studied hospitals based on the 9 steps of the model was 44% (moderate). A significant relationship was observed between accountability and organizational position (p=0.0012) and field of study (p=0.000). There were also significant relationships between the structured approach and organizational position (p=0.007), communication and demographic characteristics (p=0.000), and end-user participation and organizational position (p=0.03). Clinical governance should be implemented with correct needs assessment and the participation of all stakeholders, to ensure its enforcement in practice and to enhance the quality of services.

  14. First steps towards modeling of ion-driven turbulence in Wendelstein 7-X

    Science.gov (United States)

    Warmer, F.; Xanthopoulos, P.; Proll, J. H. E.; Beidler, C. D.; Turkin, Y.; Wolf, R. C.

    2018-01-01

    Due to the foreseen improvement of neoclassical confinement in optimised stellarators, such as the newly commissioned Wendelstein 7-X (W7-X) experiment in Greifswald, Germany, it is expected that turbulence will significantly contribute to the heat and particle transport, thus posing a limit to the performance of such devices. In order to develop discharge scenarios, it is necessary to develop a model which reliably captures the basic characteristics of turbulence and predicts its level. Such a model is not only affordable, requiring only a fraction of the computational cost normally needed for repeated direct turbulence simulations, but also highlights important physics. In this model, we seek to describe the ion heat flux caused by ion temperature gradient (ITG) micro-turbulence, which, in certain heating scenarios, can draw on a strong source of free energy. With the aid of a relatively small number of state-of-the-art nonlinear gyrokinetic simulations, an initial critical gradient model (CGM) is devised, with the aim of replacing an empirical model stemming from observations in prior stellarator experiments. The novel CGM, in its present form, encapsulates all available knowledge about ion-driven 3D turbulence to date, and allows for further important extensions towards an accurate interpretation and prediction of the 'anomalous' transport. The CGM depends on the stiffness of the ITG turbulence scaling in W7-X, and implicitly includes the nonlinear zonal flow response. It is shown that the CGM is suitable for turbulence modeling in a 1D framework.
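
    Structurally, a critical gradient model reduces to a threshold-plus-stiffness law for the turbulent heat flux. The Python sketch below shows that structure only; the threshold, stiffness and normalization are invented, not the W7-X values fitted in the paper:

        import numpy as np

        # Structure of a critical-gradient model (CGM): turbulent ion heat
        # flux switches on above a threshold gradient with a given stiffness.
        def cgm_heat_flux(grad_norm, threshold=1.0, stiffness=2.5, q_gb=1.0):
            # grad_norm: normalized ion temperature gradient a/L_T
            return q_gb * stiffness * np.maximum(grad_norm - threshold, 0.0)

        for g in (0.5, 1.0, 1.5, 2.0):
            print(f"a/L_T = {g:.1f} -> Q_turb = {cgm_heat_flux(g):.2f} (gyro-Bohm units)")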

  15. Bonding and Bridging Social Capital in Step- and First-Time Families and the Issue of Family Boundaries

    Directory of Open Access Journals (Sweden)

    Gaëlle Aeby

    2014-06-01

    Divorce and remarriage usually imply a redefinition of family boundaries, with consequences for the production and availability of social capital. This research shows that bonding and bridging social capitals are differentially made available by families. It first hypothesizes that bridging social capital is more likely to be developed in stepfamilies, and bonding social capital in first-time families. Second, the boundaries of family configurations are expected to vary within stepfamilies and within first-time families creating a diversity of family configurations within both structures. Third, in both cases, social capital is expected to depend on the ways in which their family boundaries are set up by individuals by including or excluding ex-partners, new partner's children, siblings, and other family ties. The study is based on a sample of 300 female respondents who have at least one child of their own between 5 and 13 years, 150 from a stepfamily structure and 150 from a first-time family structure. Social capital is empirically operationalized as perceived emotional support in family networks. The results show that individuals in first-time families more often develop bonding social capital and individuals in stepfamilies bridging social capital. In both cases, however, individuals in family configurations based on close blood and conjugal ties more frequently develop bonding social capital, whereas individuals in family configurations based on in-law, stepfamily or friendship ties are more likely to develop bridging social capital.

  16. On the Role of Chemical Kinetics Modeling in the LES of Premixed Bluff Body and Backward-Facing Step Combustors

    KAUST Repository

    Chakroun, Nadim W.

    2017-01-05

    Recirculating flows in the wake of a bluff body, behind a sudden expansion or downstream of a swirler, are pivotal for anchoring a flame and expanding the stability range. The size and structure of these recirculation zones, and the accurate prediction of their length, is a very important characteristic that computational simulations should capture. Large eddy simulation (LES) techniques with an appropriate combustion model and reaction mechanism afford a balance between computational complexity and predictive accuracy. In this study, propane/air mixtures were simulated in a bluff-body stabilized combustor based on the Volvo test case and also in a backward-facing step combustor. The main goal is to investigate the role of the chemical mechanism, and of the accuracy of estimating the extinction strain rate, in the prediction of important flow features such as recirculation zones. Two 2-step mechanisms were employed, one which gave reasonable extinction strain rates and another modified 2-step mechanism which grossly over-predicted the values. This modified mechanism under-predicted recirculation zone lengths compared to the original mechanism and had worse agreement with experiments in both geometries. While the recirculation zone lengths predicted by both reduced mechanisms in the step combustor scale linearly with the extinction strain rate, the scaling curves do not match experimental results, as none of the simplified mechanisms produce extinction strain rates that are consistent with those predicted by the comprehensive mechanisms. We conclude that it is very important that a chemical mechanism is able to correctly predict extinction strain rates if it is to be used in CFD simulations.

  17. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Science.gov (United States)

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem, to find feasible, preliminary solutions for constructing the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663

  18. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to the Dual-EnKF, but requires twice as many model integrations as the standard Joint-EnKF.

  19. A step function model to evaluate the real monetary value of man-sievert with real GDP

    International Nuclear Information System (INIS)

    Na, Seong H.; Kim, Sun G.

    2009-01-01

    For use in cost-benefit analyses to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the man-sievert monetary value in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practices. GDP deflators reflect the economic situation of the society. Finally, we suggest that the Korean model can be generalized to other countries simply, without normalizing any country-specific factors.

  20. A step function model to evaluate the real monetary value of man-sievert with real GDP

    Energy Technology Data Exchange (ETDEWEB)

    Na, Seong H. [Korea Institute of Nuclear Safety, 19 Guseong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)], E-mail: shna@kins.re.kr; Kim, Sun G. [School of Business, Daejeon University, Yong Woon-dong, Dong-gu, Daejeon 300-716 (Korea, Republic of)], E-mail: sunkim@dju.ac.kr

    2009-07-15

    For use in cost-benefit analyses to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the man-sievert monetary value in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practices. GDP deflators reflect the economic situation of the society. Finally, we suggest that the Korean model can be generalized to other countries simply, without normalizing any country-specific factors.

  1. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem, to find feasible, preliminary solutions for constructing the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision.

  2. A step function model to evaluate the real monetary value of man-sievert with real GDP.

    Science.gov (United States)

    Na, Seong H; Kim, Sun G

    2009-01-01

    For use in cost-benefit analyses to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the man-sievert monetary value in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practices. GDP deflators reflect the economic situation of the society. Finally, we suggest that the Korean model can be generalized to other countries simply, without normalizing any country-specific factors.
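
    The structure of such a model, an alpha value that rises stepwise with the individual dose band, can be sketched in Python as follows; the dose bands, aversion factors and economic inputs are invented placeholders, not the paper's Korean figures:

        import numpy as np

        # Structural sketch of a dose-banded step function for the monetary
        # value of one man-sievert; all numbers below are assumptions.
        real_gdp_per_capita = 30_000.0    # assumed, in USD
        risk_coefficient = 5.7e-2         # detriment per Sv (ICRP-style figure)
        years_lost_per_effect = 15.0      # assumed average life expectancy loss

        base_alpha = real_gdp_per_capita * risk_coefficient * years_lost_per_effect

        def alpha(dose_msv):
            # Aversion multiplier increases stepwise with the individual dose band
            bands = [(1.0, 1.0), (5.0, 2.0), (20.0, 5.0), (np.inf, 10.0)]
            for upper, aversion in bands:
                if dose_msv <= upper:
                    return base_alpha * aversion

        for d in (0.5, 3.0, 15.0, 30.0):
            print(f"{d:5.1f} mSv/y -> alpha = {alpha(d):,.0f} USD per man-Sv")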

  3. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Science.gov (United States)

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against the various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared, and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. The specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, with no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of the commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option, and requires clinical validation at multiple centers.

  4. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010, one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: Create attractive documents, publications, and spreadsheets; Manage your e-mail, calendar, meetings, and communications; Put your business data to work; Develop and deliver great presentations; Organize your ideas and notes in one place; Connect, share, and accom

  5. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
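
    The contact-based clustering step lends itself to a compact illustration. The sketch below is a minimal stand-in, not the authors' code: it builds a Jaccard-style distance from shared inter-residue contacts, runs agglomerative hierarchical clustering (average linkage), and keeps the ten most populated clusters; the toy contact sets and all thresholds are assumptions.

    ```python
    import numpy as np
    from collections import Counter
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Toy stand-in for docking models: each model is its set of inter-residue contacts.
    rng = np.random.default_rng(0)
    universe = [(i, j) for i in range(20) for j in range(20)]
    models = []
    for _ in range(30):
        idx = rng.choice(len(universe), size=40, replace=False)
        models.append(frozenset(universe[k] for k in idx))

    # Pairwise distance = 1 - fraction of shared contacts (Jaccard-like criterion).
    n = len(models)
    dist = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            jac = len(models[a] & models[b]) / len(models[a] | models[b])
            dist[a, b] = dist[b, a] = 1.0 - jac

    # Agglomerative hierarchical clustering, cut at a fixed number of clusters.
    Z = linkage(squareform(dist), method="average")
    labels = fcluster(Z, t=10, criterion="maxclust")

    # Keep the ten most populated clusters; a consensus scorer (e.g. CONSRANK)
    # would then rank models within each and pick one representative per cluster.
    top_clusters = [c for c, _ in Counter(labels).most_common(10)]
    print(top_clusters)
    ```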

  6. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse; De Donato, Renato; Lensink, Marc F.; Petta, Andrea; Serra, Luigi; Scarano, Vittorio; Cavallo, Luigi; Oliva, Romina

    2016-01-01

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked

  7. PhD study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model "Assessment of Parenting Competences" (APC) in the area of families with emotionally neglected children. This study had a multiple-strategy design with a philosophical base of critical realism ... and pragmatism. The fixed design for this study was a between- and within-groups design testing the APC's reliability and validity. The two groups were parents of neglected children and parents of non-neglected children. The flexible design had a multiple case study strategy specifically...

  8. Modelling farm vulnerability to flooding: A step toward vulnerability mitigation policies appraisal

    Science.gov (United States)

    Brémond, P.; Abrami, G.; Blanc, C.; Grelot, F.

    2009-04-01

    flood. In the case of farm activities, vulnerability mitigation consists in implementing measures which can be physical (elevation of equipment or of the electric power system), organizational (an emergency or recovery plan) or financial (insurance). These measures aim at decreasing the total damage incurred by farmers in case of flooding. For instance, if equipment is elevated, it will not suffer direct damage such as degradation; as a consequence, the equipment remains available to continue production or recovery tasks, thus avoiding indirect damage such as delays, indebtedness… The effects of these policies on farms, in particular vulnerability mitigation, cannot be appraised using current methodologies, mainly because those methodologies do not consider the farm as a whole and focus on direct damage at the land-plot scale (loss of yield). Moreover, since vulnerability mitigation policies are quite recent, few examples of implementation exist and no feedback from experience can be processed. Meanwhile, decision makers and financial actors require more justification of the efficient use of public funds through economic appraisal of the projects. On the Rhône River, decision makers asked for an economic evaluation of the programme of farm vulnerability mitigation they plan to implement. This implies identifying the effects of the measures to mitigate farm vulnerability, and classifying them by comparing their efficacy (avoided damage) and their cost of implementation. In this presentation, we propose and discuss a conceptual model of vulnerability at the farm scale. The modelling, in Unified Modelling Language, enabled us to represent the ties between the spatial, organizational and temporal dimensions, which are central to the understanding of farm vulnerability and resilience to flooding. Through this modelling, we pursue three goals: to improve the comprehension of farm vulnerability and create a framework that allows discussion with experts of different disciplines as well as with local farmers; to identify data which
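
    As a rough illustration of what a farm-scale vulnerability model has to tie together, the sketch below encodes the three dimensions named above (spatial, organizational, temporal) as plain data structures; the class names, fields and damage rule are hypothetical, not the authors' UML model.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical farm-scale structure; illustrative only, not the authors' UML.
    @dataclass
    class Equipment:
        name: str
        elevated: bool = False           # physical mitigation measure

    @dataclass
    class Task:
        name: str
        week: int                        # temporal dimension: place in the crop calendar
        needs: list[str] = field(default_factory=list)

    @dataclass
    class Farm:
        plots_flooded: float             # spatial dimension: share of land flooded
        equipment: list[Equipment]
        calendar: list[Task]             # organizational dimension

        def indirect_damage(self) -> list[str]:
            """Tasks delayed because required equipment was damaged by the flood."""
            damaged = {e.name for e in self.equipment
                       if not e.elevated and self.plots_flooded > 0}
            return [t.name for t in self.calendar if damaged & set(t.needs)]

    farm = Farm(0.4,
                [Equipment("tractor"), Equipment("pump", elevated=True)],
                [Task("sowing", 12, ["tractor"]), Task("irrigation", 20, ["pump"])])
    print(farm.indirect_damage())        # ['sowing']: elevating the pump avoided delays
    ```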

  9. Two-step Laser Time-of-Flight Mass Spectrometry to Elucidate Organic Diversity in Planetary Surface Materials.

    Science.gov (United States)

    Getty, Stephanie A.; Brinckerhoff, William B.; Cornish, Timothy; Li, Xiang; Floyd, Melissa; Arevalo, Ricardo Jr.; Cook, Jamie Elsila; Callahan, Michael P.

    2013-01-01

    Laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) holds promise as a low-mass, compact in situ analytical capability for future landed missions to planetary surfaces. The ability to analyze a solid sample for both mineralogical and preserved organic content with laser ionization could be compelling as part of a scientific mission payload that must be prepared for unanticipated discoveries. Targeted missions for this instrument capability include Mars, Europa, Enceladus, and small icy bodies such as asteroids and comets.

  10. A one-step method for modelling longitudinal data with differential equations.

    Science.gov (United States)

    Hu, Yueqin; Treinen, Raymond

    2018-04-06

    Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimating the parameters of differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, the new approach directly fits the analytic solution of a differential equation to the observed data, which simplifies the procedure and avoids bias from derivative estimation. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard errors, larger statistical power and an accurate Type I error rate. Although ASDE yields biased estimates when the system has a sudden phase change, the bias is not serious, and a solution to the phase problem is also provided. The ASDE method is illustrated and applied to a two-week study of consumers' shopping behaviour after a sales promotion, and to a set of public data tracking participants' grammatical facial expressions in sign language. R code for ASDE, with recommendations for sample sizes and starting values, is provided. Limitations and several possible extensions of ASDE are also discussed. © 2018 The British Psychological Society.
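
    The paper provides R code; the minimal Python analogue below conveys the core ASDE idea on an assumed model: instead of estimating derivatives from the data, the analytic solution of the differential equation (here a logistic ODE, chosen for illustration) is fitted to the observations directly with non-linear least squares.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Analytic solution of the logistic ODE dy/dt = r*y*(1 - y/K).
    def logistic(t, y0, r, K):
        return K / (1 + (K / y0 - 1) * np.exp(-r * t))

    # Synthetic observations, e.g. a two-week observation window.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 14, 50)
    y = logistic(t, y0=2.0, r=0.8, K=30.0) + rng.normal(0, 0.5, t.size)

    # Fit the analytic solution to the observations directly -- no derivative
    # estimation step, which is the point of the ASDE approach.
    params, cov = curve_fit(logistic, t, y, p0=[1.0, 0.5, 25.0])
    se = np.sqrt(np.diag(cov))
    print("y0, r, K =", params, "+/-", se)
    ```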

  11. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment at this stage understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of support received from teachers on school adjustment, and of support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although an indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  12. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment at this stage understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of support received from teachers on school adjustment, and of support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although an indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  13. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model of steam turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency, so as to obtain a model able to simulate all the modelled equipment in real time. (A.C.A.S.)

  14. Web based health surveys: Using a Two Step Heckman model to examine their potential for population health analysis.

    Science.gov (United States)

    Morrissey, Karyn; Kinderman, Peter; Pontin, Eleanor; Tai, Sara; Schwannauer, Mathias

    2016-08-01

    In June 2011 the BBC Lab UK carried out a web-based survey on the causes of mental distress. The 'Stress Test' was launched on 'All in the Mind', a BBC Radio 4 programme, and the test's URL was publicised on radio and TV broadcasts and made available via BBC web pages and social media. Given the large amount of data created (over 32,800 participants, with corresponding diagnosis, demographic and socioeconomic characteristics), the dataset is potentially an important source of data for population-based research on depression and anxiety. However, as respondents self-selected to participate in the online survey, the survey may comprise a non-random sample: it may be that only individuals who listen to BBC Radio 4 and/or use its website participated. In this instance, using the Stress Test data for wider population-based research may introduce sample selection bias. Focusing on the depression component of the Stress Test, this paper presents an easy-to-use method, the Two-Step Probit Selection Model, to detect and statistically correct selection bias in the Stress Test. Using this model, the paper did not find statistically significant selection on unobserved factors for participants of the Stress Test. That is, survey participants who accessed and completed the online survey are not systematically different from non-participants on the variables of substantive interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
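
    A minimal sketch of the two-step probit (Heckman-type) selection correction on synthetic data follows; the data-generating process, variable names and coefficients are invented for illustration and do not reproduce the Stress Test analysis.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    # Synthetic data: the outcome y is only observed for self-selected participants,
    # and the selection error is correlated with the outcome error (rho = 0.5).
    rng = np.random.default_rng(2)
    n = 5000
    z = rng.normal(size=n)                      # drives participation only
    x = rng.normal(size=n)                      # covariate of substantive interest
    u, v = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
    participate = (0.5 + 1.0 * z + v) > 0       # selection equation
    y = np.where(participate, 1.0 + 2.0 * x + u, np.nan)

    # Step 1: probit for participation, then the inverse Mills ratio (IMR).
    Z = sm.add_constant(z)
    probit = sm.Probit(participate.astype(float), Z).fit(disp=0)
    xb = Z @ probit.params
    imr = norm.pdf(xb) / norm.cdf(xb)

    # Step 2: outcome regression on participants with the IMR as an extra
    # regressor; a significant IMR coefficient signals selection on unobservables.
    m = participate
    X = sm.add_constant(np.column_stack([x[m], imr[m]]))
    print(sm.OLS(y[m], X).fit().params)         # const, x, IMR
    ```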

  15. Time series sightability modeling of animal populations.

    Science.gov (United States)

    ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R

    2018-01-01

    Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in estimates similar to the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. We therefore developed a robust hierarchical modeling approach in which the sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits of model-based approaches that allow information to be shared across years.
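
    A toy version of the mHT adjustment described above: a previously fitted logistic sightability model supplies each detected group's detection probability, and abundance is the sum of inverse-probability-weighted group sizes. The covariate, coefficients and data are made up, and the sketch omits the sampling-fraction and variance corrections a real survey analysis would include.

    ```python
    import numpy as np

    # Hypothetical sightability model fitted to marked-animal data:
    # logit(p_detect) = b0 + b1 * visual_obstruction. Coefficients are illustrative.
    b0, b1 = 2.0, -3.5

    def p_detect(voc: float) -> float:
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * voc)))

    # Detected groups in a detection-only survey: (group size, visual obstruction).
    groups = [(3, 0.2), (1, 0.8), (5, 0.1), (2, 0.6)]

    # mHT estimate: each detected group is inflated by its inverse detection
    # probability, so hard-to-see groups count for more than they were seen.
    abundance = sum(size / p_detect(voc) for size, voc in groups)
    print(f"estimated abundance = {abundance:.1f}")
    ```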

  16. The Influences of Time and Velocity of Inert Gas on the Quality of the Processing Product of Graphite Matrix on the Baking Step

    International Nuclear Information System (INIS)

    Imam-Dahroni; Dwi-Herwidhi; NS, Kasilani

    2000-01-01

    The synthesis of the graphite matrix in the baking process step was investigated, focusing on the influence of the time and velocity variables of the inert gas. Baking times ranging from 5 minutes to 55 minutes were investigated, and varying the inert-gas flow rate from 0.30 l/minute to 3.60 l/minute resulted in matrices of different quality. Optimization of the operation time and the argon flow rate indicated that a baking time of 30 minutes with an argon flow rate of 2.60 l/minute produced the best graphite matrix, with a hardness of 11 kg/mm² and a ductility of 1800 Newton. (author)

  17. Time-varying impedance of the human ankle in the sagittal and frontal planes during straight walk and turning steps.

    Science.gov (United States)

    Ficanha, Evandro M; Ribeiro, Guilherme A; Knop, Lauren; Rastgaar, Mo

    2017-07-01

    This paper describes the methods and experimental protocols for estimating human ankle impedance during turning and straight-line walking. The ankle impedance of two human subjects during the stance phase of walking was estimated in both dorsiflexion-plantarflexion (DP) and inversion-eversion (IE). The impedance was estimated about 8 axes of rotation of the human ankle, combining different amounts of DP and IE rotation and differentiating between positive and negative rotations, at 5 instants of the stance length (SL): 10%, 30%, 50%, 70% and 90% of the SL. The ankle impedance showed great variability across time and across the axes of rotation, with consistently larger stiffness and damping in DP than in IE. When comparing straight walking and turning, the main differences were in damping at 50%, 70% and 90% of the SL, with an increase in damping about all axes of rotation during turning.
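
    The estimation target can be made concrete with a small least-squares sketch: at one instant of the stance length, ankle torque is regressed on angular acceleration, velocity and position over an ensemble of strides, yielding inertia, damping and stiffness. The synthetic data and second-order model below are assumptions for illustration; the paper's perturbation-based protocol is not reproduced.

    ```python
    import numpy as np

    # Fit torque = I*acc + B*vel + K*angle at one stance instant over many strides.
    rng = np.random.default_rng(5)
    n = 200                                    # strides in the ensemble
    angle = rng.normal(0, 0.05, n)             # rad
    vel = rng.normal(0, 0.5, n)                # rad/s
    acc = rng.normal(0, 5.0, n)                # rad/s^2
    I_true, B_true, K_true = 0.02, 1.5, 300.0  # illustrative impedance parameters
    torque = I_true * acc + B_true * vel + K_true * angle + rng.normal(0, 0.2, n)

    # Ordinary least squares recovers the impedance parameters at this instant;
    # repeating at each % of SL gives the time-varying impedance profile.
    A = np.column_stack([acc, vel, angle])
    (I_hat, B_hat, K_hat), *_ = np.linalg.lstsq(A, torque, rcond=None)
    print(f"I={I_hat:.3f} kg*m^2, B={B_hat:.2f} N*m*s/rad, K={K_hat:.1f} N*m/rad")
    ```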

  18. OTA-Grapes: A Mechanistic Model to Predict Ochratoxin A Risk in Grapes, a Step beyond the Systems Approach

    Directory of Open Access Journals (Sweden)

    Battilani Paola

    2015-08-01

    Full Text Available Ochratoxin A (OTA) is a fungal metabolite dangerous to human and animal health due to its nephrotoxic, immunotoxic, mutagenic, teratogenic and carcinogenic effects, classified by the International Agency for Research on Cancer in group 2B (possible human carcinogen). This toxin has been reported as a wine contaminant since 1996. The aim of this study was to develop a conceptual model for the dynamic simulation of the A. carbonarius life cycle in grapes over the growing season, including OTA production in the berries. Functions describing the role of weather parameters in each step of the infection cycle were developed and organized in a prototype model called OTA-grapes. In modelling the influence of temperature on OTA production, it emerged that fungal strains can be separated into two clusters, based on the dynamics of OTA production and on the optimal temperature. Therefore, two functions were developed, and based on statistical analysis of the data, it was assumed that the two types of strains contribute equally to the population. Model validation was not possible because of the scarcity of OTA contamination data, but relevant differences in OTA-I, the output index of the model, were observed between low- and high-risk areas. To our knowledge, this is the first attempt to assess and model A. carbonarius in order to predict the risk of OTA contamination in grapes.
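
    To make the temperature functions concrete, the sketch below uses a generic bell-shaped cardinal-temperature response (a common choice in mechanistic crop and mycotoxin models) for two hypothetical strain clusters with different optima, weighted equally as the abstract assumes; all cardinal temperatures are invented, not the OTA-grapes parameters.

    ```python
    import numpy as np

    def bell(T, Tmin, Topt, Tmax):
        """Bell-shaped temperature response: 1 at Topt, 0 at the cardinal limits."""
        T = np.asarray(T, dtype=float)
        out = np.zeros_like(T)
        ok = (T > Tmin) & (T < Tmax)
        expo = (Topt - Tmin) / (Tmax - Topt)
        out[ok] = ((Tmax - T[ok]) / (Tmax - Topt)) * \
                  (((T[ok] - Tmin) / (Topt - Tmin)) ** expo)
        return out

    # Two hypothetical strain clusters with different optimal temperatures,
    # contributing equally to the population-level OTA production index.
    def ota_index(T):
        return 0.5 * bell(T, 10.0, 22.0, 37.0) + 0.5 * bell(T, 12.0, 30.0, 40.0)

    for T in (15, 22, 30, 35):
        print(f"T = {T} C -> OTA index = {float(ota_index(T)):.3f}")
    ```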

  19. Climatic change on the Gulf of Fonseca (Central America) using two-step statistical downscaling of CMIP5 model outputs

    Science.gov (United States)

    Ribalaygua, Jaime; Gaitán, Emma; Pórtoles, Javier; Monjo, Robert

    2018-05-01

    A two-step statistical downscaling method has been reviewed and adapted to simulate twenty-first-century climate projections for the Gulf of Fonseca (Central America, Pacific coast) using Coupled Model Intercomparison Project (CMIP5) climate models. The downscaling methodology was adjusted after searching for good predictor fields for this area (where the geostrophic approximation fails and real wind fields are the most applicable predictors). The method's performance for daily precipitation and maximum and minimum temperature was analysed, with suitable results for all variables. For instance, the method is able to simulate the characteristic cycle of the wet season in this area, which includes a mid-summer drought between two peaks. Future projections show a gradual temperature increase throughout the twenty-first century and a change in the features of the wet season: the first peak and mid-summer rainfall are reduced relative to the second peak, the wet season has an earlier onset, and the second peak becomes broader.
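
    The two-step logic can be sketched generically: step one selects the historical days whose large-scale predictor fields best resemble the target day (analogs), and step two fits a local regression on those analogs only. The toy example below assumes PCA-reduced predictor vectors and a single local variable; it is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_hist = 2000
    # Historical large-scale predictors (e.g. wind fields, PCA-reduced to 5 components)
    # and the co-observed local variable (e.g. daily maximum temperature).
    predictors_hist = rng.normal(size=(n_hist, 5))
    local_temp_hist = 25 + 2 * predictors_hist[:, 0] + rng.normal(0, 0.5, n_hist)

    def downscale(predictor_day, k=50):
        # Step 1: analog selection by Euclidean distance in predictor space.
        d = np.linalg.norm(predictors_hist - predictor_day, axis=1)
        analogs = np.argsort(d)[:k]
        # Step 2: linear regression fitted only on the analog days.
        A = np.column_stack([np.ones(k), predictors_hist[analogs, 0]])
        coef, *_ = np.linalg.lstsq(A, local_temp_hist[analogs], rcond=None)
        return coef[0] + coef[1] * predictor_day[0]

    gcm_day = rng.normal(size=5)   # a (bias-corrected) GCM predictor vector
    print(f"downscaled local temperature: {downscale(gcm_day):.2f} C")
    ```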

  20. Mixed-order phase transition in a two-step contagion model with a single infectious seed.

    Science.gov (United States)

    Choi, Wonjun; Lee, Deokjae; Kahng, B

    2017-02-01

    Percolation is known as one of the most robust continuous transitions, because its occupation rule is intrinsically local. One way to break this robustness is to allow occupation by more than one species of particles that occupy cooperatively; this generalized percolation model undergoes a discontinuous transition. Here we investigate an epidemic model with two contagion steps and characterize its phase transition analytically and numerically. We find that even though the order parameter jumps at a transition point r_{c} and then increases continuously, it does not exhibit any critical behavior: the fluctuations of the order parameter do not diverge at r_{c}. However, critical behavior does appear in the mean outbreak size, which diverges at the transition point in the manner of ordinary percolation. This type of phase transition is regarded as a mixed-order phase transition. We also obtain scaling relations for the cascade outbreak statistics when the order parameter jumps at r_{c}.
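
    A minimal Monte Carlo sketch of such a two-step contagion (an SWIR-type model) from a single infectious seed is given below; the contact rule and all probabilities are illustrative, not the paper's parameterization. Scanning the direct-infection probability and averaging the outbreak size over many runs is how the jump of the order parameter and the diverging mean outbreak size would show up.

    ```python
    import random
    import networkx as nx

    # Two-step contagion on a random graph: on contact with an infectious node,
    # S -> I with prob KAPPA, else S -> W (weakened) with prob MU; a weakened
    # node hit again becomes infected with prob ETA. Values are illustrative.
    KAPPA, MU, ETA = 0.18, 0.8, 0.9
    random.seed(4)
    N, mean_degree = 10_000, 8
    G = nx.gnp_random_graph(N, mean_degree / N, seed=4)

    state = {v: "S" for v in G}
    state[0] = "I"                  # single infectious seed
    active = [0]
    outbreak = 1

    while active:
        v = active.pop()
        for u in G.neighbors(v):
            r = random.random()
            if state[u] == "S":
                if r < KAPPA:
                    state[u] = "I"; active.append(u); outbreak += 1
                elif r < KAPPA + MU:
                    state[u] = "W"          # weakened: one more hit infects it
            elif state[u] == "W" and r < ETA:
                state[u] = "I"; active.append(u); outbreak += 1
        state[v] = "R"                      # each node is infectious only once

    print(f"outbreak size: {outbreak} / {N}")
    ```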