Short-term time step convergence in a climate model.
Wan, Hui; Rasch, Philip J; Taylor, Mark A; Jablonowski, Christiane
2015-03-01
This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral-element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process-coupling time step is varied between 1800 s and 1 s to reveal the convergence rate with respect to temporal resolution. Special attention is paid to the behavior of the parameterized subgrid-scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full-physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4, considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid-scale physical parameterizations, the stratiform cloud schemes are associated with the largest time-stepping errors and are the primary cause of the slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify time-stepping errors and identify the related model sensitivities.
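The time step convergence test described above compares solutions computed at successively reduced time steps against a reference. A minimal sketch of the rate-estimation idea, using a scalar ODE and forward Euler rather than the CAM5 setup (all function names here are illustrative):

```python
import math

def euler_solve(f, y0, t_end, dt):
    """Integrate dy/dt = f(y) from t=0 to t_end with forward Euler."""
    y = y0
    for _ in range(round(t_end / dt)):
        y += dt * f(y)
    return y

def convergence_rate(f, y0, t_end, y_exact, dt):
    """Observed temporal order from errors at dt and dt/2 (log-2 ratio)."""
    e1 = abs(euler_solve(f, y0, t_end, dt) - y_exact)
    e2 = abs(euler_solve(f, y0, t_end, dt / 2) - y_exact)
    return math.log2(e1 / e2)

# Forward Euler on dy/dt = -y should exhibit first-order convergence (rate ~ 1);
# a slower observed rate, as in the paper, signals a non-smooth error source.
rate = convergence_rate(lambda y: -y, 1.0, 1.0, math.exp(-1.0), 0.01)
```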
Asynchronous adaptive time step in quantitative cellular automata modeling
Directory of Open Access Journals (Sweden)
Sun Yan
2004-06-01
Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4–5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time stepping is a practical solution in a cellular automata environment.
Accurate and stable time stepping in ice sheet modeling
Cheng, Gong; von Sydow, Lina
2016-01-01
In this paper we introduce adaptive time step control for simulation of the evolution of ice sheets. The discretization error in the approximations is estimated using "Milne's device" by comparing the result from two different methods in a predictor-corrector pair. Using a predictor-corrector pair the expensive part of the procedure, the solution of the velocity and pressure equations, is performed only once per time step and an estimate of the local error is easily obtained. The stability of the numerical solution is maintained and the accuracy is controlled by keeping the local error below a given threshold using PI-control. Depending on the threshold, the time step Δt is bound by stability requirements or accuracy requirements. Our method takes a shorter Δt than an implicit method but with less work in each time step and the solver is simpler. The method is analyzed theoretically with respect to stability and applied to the simulation of a 2D ice slab and a 3D circular ice sheet.
Accurate and stable time stepping in ice sheet modeling
Cheng, Gong; Lötstedt, Per; von Sydow, Lina
2017-01-01
In this paper we introduce adaptive time step control for simulation of the evolution of ice sheets. The discretization error in the approximations is estimated using "Milne's device" by comparing the result from two different methods in a predictor-corrector pair. Using a predictor-corrector pair the expensive part of the procedure, the solution of the velocity and pressure equations, is performed only once per time step and an estimate of the local error is easily obtained. The stability of the numerical solution is maintained and the accuracy is controlled by keeping the local error below a given threshold using PI-control. Depending on the threshold, the time step Δt is bound by stability requirements or accuracy requirements. Our method takes a shorter Δt than an implicit method but with less work in each time step and the solver is simpler. The method is analyzed theoretically with respect to stability and applied to the simulation of a 2D ice slab and a 3D circular ice sheet. The stability bounds in the experiments are explained by and agree well with the theoretical results.
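The predictor-corrector error estimate and PI step-size control described above can be sketched on a scalar ODE. This is a hedged toy, not the paper's ice sheet solver: the pair here is forward Euler / trapezoidal, and the controller gains are illustrative values, not the ones used in the paper:

```python
def step_pair(f, t, y, dt):
    """Forward-Euler predictor + trapezoidal corrector; the predictor-
    corrector difference serves as a Milne-style local error estimate."""
    yp = y + dt * f(t, y)
    yc = y + 0.5 * dt * (f(t, y) + f(t + dt, yp))
    return yc, abs(yc - yp)

def integrate_pi(f, y0, t_end, tol=1e-5, dt0=1e-2, kI=0.3, kP=0.4):
    """Accept a step when the error estimate is below tol; adapt dt with a
    PI controller (gains kI, kP are illustrative, not from the paper)."""
    t, y, dt, err_prev = 0.0, y0, dt0, tol
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        y_new, err = step_pair(f, t, y, dt)
        err = max(err, 1e-16)              # guard against division by zero
        if err <= tol:                     # accept the step
            t, y, err_prev = t + dt, y_new, err
        # PI update: integral term (tol/err) plus proportional term (err_prev/err)
        dt *= (tol / err) ** kI * (err_prev / err) ** kP
    return y

y = integrate_pi(lambda t, y: -y, 1.0, 1.0)
```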
Modeling solute transport in distribution networks with variable demand and time step sizes.
Energy Technology Data Exchange (ETDEWEB)
Peyton, Chad E.; Bilisoly, Roger Lee; Buchberger, Steven G. (University of Cincinnati, Cincinnati, OH); McKenna, Sean Andrew; Yarrington, Lane
2004-06-01
The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can smooth a pulse of tracer injected into a distribution network and increase the variability of both the transport pathway and transport timing through the network. Variable demands are simulated for these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator that considers demand at a node to be a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes, and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water, and therefore physical dispersion cannot occur. However, variation in demands along the solute transport path contributes to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the scale of the time step increases, the variability in these performance measures decreases. The largest time steps produce results that are inconsistent with those produced by the smaller time steps.
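The PRP demand structure described above (exponential inter-arrival times, log-normal intensities and durations) can be sketched as follows. All parameter names and values are illustrative placeholders, not the generator's actual calibration:

```python
import random

def prp_demands(arrivals_per_hr, mu_dur, sd_dur, mu_int, sd_int,
                horizon_s, seed=0):
    """Poisson rectangular pulse sketch: exponentially distributed gaps
    between pulse arrivals; log-normal pulse durations and intensities."""
    rng = random.Random(seed)
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(arrivals_per_hr / 3600.0)  # next arrival [s]
        if t >= horizon_s:
            break
        pulses.append((t,
                       rng.lognormvariate(mu_dur, sd_dur),    # duration [s]
                       rng.lognormvariate(mu_int, sd_int)))   # intensity
    return pulses

def demand_at(pulses, t):
    """Instantaneous nodal demand: sum of intensities of active pulses."""
    return sum(q for (start, dur, q) in pulses if start <= t < start + dur)

pulses = prp_demands(10.0, 4.0, 0.5, -2.0, 0.3, 3600.0)
```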
Efficient estimation of time-mean states of ocean models using 4D-Var and implicit time-stepping
Terwisscha van Scheltinga, A.D.; Dijkstra, H.A.
2007-01-01
We propose an efficient method for estimating a time-mean state of an ocean model subject to given observations using implicit time-stepping. The new method uses (i) an implicit implementation of the 4D-Var method to fit the model trajectory to the observations, and (ii) a preprocessor which applies
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-03-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm considerably improves computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3-4 orders of magnitude disparity in temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time step sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle time step size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest time step sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving particle-based systems at multiple scales and performing efficient multiscale simulations.
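The cost structure of a nested multiple-time-stepping scheme like the one above can be made concrete by counting solver invocations per level. The four levels mirror fluid / fluid-platelet interface / nonbonded / bonded, but the subcycling ratios below are illustrative, not the paper's values:

```python
def mts_counts(n_macro, ratios):
    """Solver invocations per level in a nested multiple-time-stepping
    scheme. Level 0 advances with the largest time step; ratios[k] is the
    number of level-(k+1) substeps taken per level-k step."""
    counts = [n_macro]
    for r in ratios:
        counts.append(counts[-1] * r)
    return counts

# e.g. 10 macro (fluid) steps with 4x, 10x and 5x subcycling at the
# interface, nonbonded and bonded levels respectively
calls = mts_counts(10, [4, 10, 5])
```

Only the cheapest level runs at the finest resolution, which is the source of the efficiency gain over advancing everything at the smallest time step.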
Sensitivity of The High-resolution Wam Model With Respect To Time Step
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with a characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by tall forests. The area also contains numerous banks with water depths of a couple of metres that may essentially modify wave properties nearby owing to topographical effects. These features suggest that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the RMS difference of significant wave heights calculated with the different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas mean of significant wave
Step by step: Revisiting step tolling in the bottleneck model
Lindsey, C.R.; Berg, van den V.A.C.; Verhoef, E.T.
2010-01-01
In most dynamic traffic congestion models, congestion tolls must vary continuously over time to achieve the full optimum. This is also the case in Vickrey's (1969) 'bottleneck model'. To date, the closest approximations of this ideal in practice have so-called 'step tolls', in which the toll takes o
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
BIOMAP simulates competition between two Plant Functional Types (PFTs) at any given point in the conterminous U.S. using a time series of daily temperature (mean, minimum, maximum), precipitation, humidity, light and nutrients, with PFT-specific rooting within a multi-layer soil. The model employs a 2-layer canopy biophysics, Farquhar photosynthesis, the Beer-Lambert law for light attenuation and a mechanistic soil hydrology. In essence, BIOMAP is the biogeochemistry model BIOME-BGC re-built in the form of the MAPSS biogeography model. Specific enhancements are: 1) the 2-layer canopy biophysics of Dolman (1993); 2) the unique MAPSS-based hydrology, which incorporates canopy evaporation, snow dynamics, infiltration, and saturated and unsaturated percolation with ‘fast’ flow, base flow and a ‘tunable aquifer’ capacity, a metaphor of Darcy's law; and 3) a unique MAPSS-based stomatal conductance algorithm, which simultaneously incorporates vapor pressure and soil water potential constraints, based on physiological information, and many other improvements. Over small domains the PFTs can be parameterized as individual species to investigate fundamental vs. potential niche theory, while at coarser scales the PFTs can be rendered as more general functional groups. Since all of the model processes are intrinsically leaf- to plot-scale (physiology to PFT competition), the model essentially has no ‘intrinsic’ scale and can be implemented on a grid of any size, taking on the characteristics defined by the homogeneous climate of each grid cell. Currently, the model is implemented on the VEMAP 1/2-degree daily grid over the conterminous U.S. Although both the thermal and water-limited ecotones are dynamic, following climate variability, the PFT distributions remain fixed. Thus, the model is currently being fitted with a ‘reproduction niche’ to allow full dynamic operation as a Dynamic General Vegetation Model (DGVM). While global simulations
Directory of Open Access Journals (Sweden)
Malgorzata Peszynska
2015-12-01
In this paper, we consider a reduced computational model of methane hydrate formation in variable salinity conditions, and give details on the discretization and phase equilibria implementation. We describe three time-stepping variants: Implicit, Semi-implicit, and Sequential, and we compare the accuracy and efficiency of these variants depending on the spatial and temporal discretization parameters. We also study the sensitivity of the model to the simulation parameters and in particular to the reduced phase equilibria model.
Derivation and application of time step model in solidification process simulation
Directory of Open Access Journals (Sweden)
GONG Wen-bang
2007-08-01
The heat transfer during the casting solidification process includes the heat radiation of the high temperature casting and the mold, the heat convection between the casting and the mold, and the heat conduction inside the casting and from the casting to the mold. In this paper, a formula of time step in simulation of solidification is derived, considering the heat radiation, convection and conduction based on the conservation of energy. The different heat transfer conditions between the conventional sand casting and the permanent mold casting are taken into account in this formula. The characteristics of heat transfer in the interior and surface of the casting are also considered. The numerical experiments show that this formula can avoid computational dispersion, and improve the computational efficiency by about 20% in the simulation of solidification process.
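The abstract does not state the derived formula, but the flavor of an energy-balance time step limit can be sketched for 1D explicit conduction with a surface heat loss term. This is a hedged stand-in, NOT the paper's formula; the radiation contribution is folded into a linearized coefficient h, and all numbers are illustrative:

```python
def max_stable_dt_1d(rho, c, k, dx, h=0.0):
    """Explicit energy-balance step limits for 1D conduction on a grid of
    spacing dx. rho [kg/m^3], c [J/kg/K], k [W/m/K]; h [W/m^2/K] is a
    combined convection plus linearized-radiation surface coefficient."""
    dt_interior = rho * c * dx ** 2 / (2.0 * k)   # classic diffusion limit
    if h <= 0.0:
        return dt_interior
    # surface half-cell: rho*c*(dx/2) dT/dt = h*(Tinf-T) + k*(T1-T)/dx
    dt_surface = rho * c * dx / (h + k / dx)
    return min(dt_interior, dt_surface)

# sand-mold-like numbers (illustrative): the interior conduction limit binds
dt = max_stable_dt_1d(rho=1600.0, c=1000.0, k=1.0, dx=0.005, h=100.0)
```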
Lockerby, Duncan A.; Duque-Daza, Carlos A.; Borg, Matthew K.; Reese, Jason M.
2012-05-01
In this paper we describe a numerical method for the efficient time-accurate coupling of hybrid continuum/molecular micro gas flow solvers. Hybrid approaches are commonly used when non-equilibrium effects in the flow field are spatially localized; in these regions a more accurate, but typically more expensive, solution procedure is adopted. Although this can greatly increase efficiency in steady flows, in unsteady flows the evolution of the solution as a whole is restricted by the maximum time step allowed by the molecular-based/kinetic model; numerically speaking, this is a stiff problem. In the method presented in this paper we exploit time-scale separation, when it exists, to partially decouple the temporal evolution of the two parts of the hybrid model. This affords major computational savings. The method is a modified/extended version of the seamless heterogeneous multiscale method (SHMM). Our approach allows multiple micro steps (molecular steps) before coupling with the macro (continuum) solver: we call this a multi-step SHMM. This maintains the main advantages of SHMM (computational speed-up and flexible application) while improving on accuracy and greatly reducing the number of continuum computations and instances of coupling required. The improved accuracy of the multi-step SHMM is demonstrated for two canonical one-dimensional transient flows (oscillatory Poiseuille and oscillatory Couette flow) and for rarefied-gas oscillatory Poiseuille flow.
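The multi-step idea above, subcycling the molecular solver many times between continuum couplings, reduces the number of expensive continuum updates and coupling instances. A generic sketch (function names and the bookkeeping are mine, not the SHMM implementation):

```python
def multi_step_shmm(n_macro, micro_per_macro, macro_step, micro_step):
    """Multi-step SHMM sketch: the micro (molecular) solver takes
    micro_per_macro small steps between successive continuum couplings,
    rather than coupling after every micro step as in plain SHMM."""
    couplings = 0
    for _ in range(n_macro):
        for _ in range(micro_per_macro):
            micro_step()            # many cheap molecular/kinetic updates
        macro_step()                # one continuum update + data exchange
        couplings += 1
    return couplings

calls = {"macro": 0, "micro": 0}
n_couplings = multi_step_shmm(
    5, 20,
    lambda: calls.__setitem__("macro", calls["macro"] + 1),
    lambda: calls.__setitem__("micro", calls["micro"] + 1))
```

With 20 micro steps per macro step, 100 micro updates cost only 5 continuum solves and 5 couplings, which is the source of the claimed savings.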
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative
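The microcanonical cascade step described above can be sketched as follows. Mass is conserved exactly at every branching; the branching probabilities here (a fixed chance of a one-sided split, otherwise a uniform weight) are illustrative stand-ins for the fitted generator probabilities discussed in the text:

```python
import random

def cascade_disaggregate(total, levels, p_onehalf=0.2, seed=1):
    """Microcanonical random cascade sketch: split an event's rainfall
    total over 2**levels sub-intervals, conserving mass at each branching.
    p_onehalf is the probability that one half receives all the mass."""
    rng = random.Random(seed)
    series = [total]
    for _ in range(levels):
        nxt = []
        for amount in series:
            u = rng.random()
            if u < p_onehalf / 2:
                w = 0.0                    # all mass to the right half
            elif u < p_onehalf:
                w = 1.0                    # all mass to the left half
            else:
                w = rng.random()           # x / (1 - x) splitting
            nxt.extend([amount * w, amount * (1.0 - w)])
        series = nxt
    return series

series = cascade_disaggregate(12.0, 5)     # e.g. one event -> 32 fine bins
```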
Short time step continuous rainfall modeling and simulation of extreme events
Callau Poduje, A. C.; Haberlandt, U.
2017-09-01
The design, planning, operation and overall assessment of urban drainage systems require long, continuous rain series at high temporal resolution. Unfortunately, the availability of such data is usually limited; a precipitation model can be used to tackle this shortcoming. The aim of this study is therefore to present a stochastic point precipitation model that reproduces average rainfall event properties along with extreme values. For this purpose a model is proposed to generate long synthetic series of rainfall at a temporal resolution of 5 min. It is based on an alternating renewal framework, and events are characterized by variables describing durations, amounts and peaks. A group of 24 stations located in the north of Germany is used to set up and test the model. Adequate modeling of the joint behaviour of rainfall amount and duration is found to be essential for reproducing the observed properties, especially for extreme events. Copulas are advantageous tools for modeling these variables jointly; however, caution must be taken in selecting the proper copula. The inclusion of seasonality and small events is also tested and found to be useful. The model is validated directly by generating long synthetic time series and comparing them with observed ones. An indirect validation is also performed based on a fictional urban hydrological system. The proposed model is capable of reproducing seasonal behaviour and the main characteristics of rainfall events, including extremes, along with urban flooding and overflow behaviour. Overall the performance of the model is acceptable compared to design practice. The proposed model is simple to interpret, fast to implement and easy to transfer to other regions, while showing acceptable results.
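The alternating renewal structure with copula-linked event variables can be sketched as follows. The study selects among several copula families and fits margins to data; the Gaussian copula, log-normal margins and all parameter values below are merely the simplest illustrative stand-ins:

```python
import math
import random

def copula_event(rng, rho, mu_d, sd_d, mu_a, sd_a):
    """One (duration, amount) pair with log-normal margins joined by a
    Gaussian copula of correlation rho."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return math.exp(mu_d + sd_d * z1), math.exp(mu_a + sd_a * z2)

def alternating_renewal(n_events, rho=0.6, mean_dry_min=600.0, seed=3):
    """Alternate exponentially distributed dry spells with wet events whose
    duration and amount are drawn jointly via the copula above."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n_events):
        t += rng.expovariate(1.0 / mean_dry_min)       # dry spell [min]
        dur, amount = copula_event(rng, rho, math.log(60.0), 0.6,
                                   math.log(5.0), 0.8)
        events.append((t, dur, amount))                # (start, dur, amount)
        t += dur
    return events

events = alternating_renewal(50)
```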
2016-12-01
Simplifications made under the corresponding limitations of computational power can produce misleading results. For example, Radar Cross Section (RCS) effects in response to time-varying... [Figure 6.1: the RCS of an F-16 Falcon fighter model, simulated with CST Studio software at a signal frequency of 8 GHz.]
Institute of Scientific and Technical Information of China (English)
WANG Zhan-zhi; XIONG Ying
2013-01-01
A growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model achieves good precision in the open water performance prediction of contra-rotating propellers, and that a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2015-04-01
Nowadays, most hydrological catchment models are designed to allow their use for streamflow simulation at different time scales. While this permits models to be applied for broader purposes, it can also be a source of error in simulating hydrological processes at catchment scale. Such errors seem not to affect simple conceptual models significantly, but this flexibility may lead to large behavioral errors in physically based models. Equations used in processes such as soil moisture time-variation are usually representative at certain time scales but may not characterize water transfer in soil layers properly at larger scales. This effect is especially relevant as we move from a detailed hourly scale to a daily time step, which are common time scales for catchment streamflow simulation in research and management practice. This study aims to provide an objective methodology to identify the degree of similarity of optimal parameter values when hydrological catchment model calibration is performed at different time scales, thus providing information for an informed discussion of the physical significance of parameters in hydrological models. In this research, we analyze the influence of simulation time scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model, and 2) streamflow simulation efficiency, while optimization is carried out at different time scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version to three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographical Index (TI) intervals. Surface
Discrete step model of helix-coil kinetics: Distribution of fluctuation times
Poland, Douglas
1996-07-01
A method is outlined for the computer simulation of the cooperative kinetics required to construct the distribution function for time intervals between fluctuations in conformational states in macromolecules. Using the helix-coil transition in polyamino acids as an example, we develop a Monte Carlo cellular automata approximation of the kinetics of this system in discrete time. This approximation is tested against a number of exact solutions for homopolymers and is then used to calculate moments of the distribution function for the time intervals between switches in conformational state at a given site (e.g., given a switch from coil to helix at zero time, how long will it take before the state switches back). The maximum-entropy method is used to construct the very broad distribution function from the moments. In heteropolymers the diffusion of helix-coil boundaries is reduced, helix being more localized on strong helix-forming residues. We investigate the effect of a specific sequence of amino acid residues on conformational fluctuations by using the known σ and s values for the naturally occurring amino acids to simulate the kinetics of helix formation (limiting the range of cooperativity to the α-helix) in sperm whale myoglobin, giving the time evolution to the equilibrium probability profile in this system.
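The discrete-time Monte Carlo approach described above can be sketched for a homopolymer chain. This is a hedged toy, not the paper's calibrated automaton: the heat-bath update rule built from Zimm-Bragg-like weights (s = propagation, sigma = nucleation penalty) and all parameter values are illustrative:

```python
import random

def helix_coil_dwell_times(n_sites=30, s=1.2, sigma=0.01,
                           steps=20000, seed=2):
    """Monte Carlo sketch of helix-coil kinetics in discrete time. Each
    site is helix (1) or coil (0); a randomly chosen site flips with a
    heat-bath probability w/(1+w), where the helix weight w carries the
    nucleation penalty sigma when no neighbor is helical. Returns the dwell
    times between conformational switches at the central site."""
    rng = random.Random(seed)
    state = [0] * n_sites
    mid, last_switch, dwells = n_sites // 2, 0, []
    for t in range(steps):
        i = rng.randrange(n_sites)
        helical_neighbors = ((state[i - 1] if i > 0 else 0)
                             + (state[i + 1] if i < n_sites - 1 else 0))
        w = s * (sigma if helical_neighbors == 0 else 1.0)
        new = 1 if rng.random() < w / (1.0 + w) else 0
        if i == mid and new != state[mid]:
            dwells.append(t - last_switch)   # time since the last switch
            last_switch = t
        state[i] = new
    return dwells

dwells = helix_coil_dwell_times()
```

From such dwell-time samples one can compute the moments that the paper feeds into its maximum-entropy reconstruction of the full distribution.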
Hornby, P. G.
2005-12-01
Understanding chemical and thermal processes taking place in hydrothermal mineral deposition systems could well be a key to unlocking new mineral reserves through improved targeting of exploration efforts. To aid this understanding it is very helpful to be able to model such processes with sufficient fidelity to test process hypotheses. To gain understanding, it is often sufficient to obtain semi-quantitative results that model the broad aspects of the complex set of thermal and chemical effects taking place in hydrothermal systems. For example, it is often sufficient to gain an understanding of where thermal, geometric and chemical factors converge to precipitate gold (say) without being perfectly precise about how much gold is precipitated. The traditional approach is to use incompressible Darcy flow together with the Boussinesq approximation. From the flow field, the heat equation is used to advect and conduct the heat. The flow field is also used to transport solutes by solving an advection-dispersion-diffusion equation. The reactions in the fluid and between fluid and rock act as source terms for these advection-dispersion equations. Many existing systems for simulating such processes use explicit time marching schemes and finite differences. The disadvantage of this approach is the need to work on rectilinear grids and the number of time steps required by the Courant condition in the solute transport step. The second factor can be particularly significant if the chemical system is complex, requiring (at a minimum) an equilibrium calculation at each grid point at each time step. In the approach we describe, we use finite elements rather than finite differences, and the pressure, heat and advection-dispersion equations are solved implicitly. The general idea is to put unconditional numerical stability of the time integration first, and let accuracy assume a secondary role. It is in this sense that the method is semi-quantitative. However
Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C
2015-09-01
Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real-world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real-world tasks. In this study, we took advantage of existing practice data from 5 simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by U.S. Federal Aviation Administration proficiency ratings). We developed a new Simultaneous Time Effects on Practice (STEP) model: (a) to model the simultaneous effects of practice and interval on performance of the 5 flights, and (b) to examine the effects of selected covariates (i.e., age, flight expertise, and 3 composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intraindividual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with practice or interval. Results indicated that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high-functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real-world tasks.
Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.
2017-05-01
We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
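The super time-stepping idea being benchmarked above can be illustrated with the first-order RKL1 recurrence (the RKL2 coefficients are more involved). This is a generic textbook sketch, not the MAS implementation, and the grid and step values are illustrative assumptions.

```python
import numpy as np

def rkl1_step(L, u, dt, s):
    """One s-stage first-order Runge-Kutta Legendre (RKL1) super time-step
    for u' = L(u), stable for dt up to (s*s + s)/2 times the explicit
    Euler limit of the parabolic operator L."""
    w = 2.0 / (s * s + s)
    y_prev, y = u, u + w * dt * L(u)                      # Y_0, Y_1
    for j in range(2, s + 1):
        mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
        y, y_prev = mu * y + nu * y_prev + mu * w * dt * L(y), y
    return y

# 1D periodic heat equation, advanced with dt ~ 7x the explicit limit
# (s = 4 stages extend the stability region by a factor (16 + 4)/2 = 10):
n, D = 64, 1.0
dx = 1.0 / n
lap = lambda v: D * (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
u = np.sin(2.0 * np.pi * np.arange(n) * dx)
u_new = rkl1_step(lap, u, dt=7.0 * dx**2 / (2.0 * D), s=4)
```

With s = 1 the recurrence collapses to forward Euler; increasing s buys stability roughly quadratically for only a linear increase in operator evaluations, which is the appeal of STS over classic explicit schemes.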
A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one
Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.
2011-01-01
The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope-0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.
Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi
2014-07-01
Nowadays the researchers want to develop their model in real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have the access to affordable, high performance simulation tools that were previously too cost prohibitive, except for the largest manufacturers. This work has introduced a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how it works". The latest trend in real-time simulation consists of exporting simulation models to FPGA. In this article, the Steps involved for implementation of a model from MATLAB to REAL-TIME are provided in detail.
Chu, Chunlei
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
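For reference, the classical Lax-Wendroff update for the 1D advection equation u_t + a·u_x = 0 looks as follows; the schemes in the abstract generalize this Taylor-expansion construction to the 3D elastic wave equation with modified temporal coefficients. This is the standard textbook scheme, not the authors' modified one.

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff step for u_t + a*u_x = 0 on a periodic grid,
    where c = a*dt/dx is the Courant number (stable for |c| <= 1).
    Second-order accurate in both space and time."""
    up, um = np.roll(u, -1), np.roll(u, 1)   # u[i+1], u[i-1]
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

# At c = 1 the stencil reduces to an exact one-cell shift:
u0 = np.sin(2.0 * np.pi * np.arange(40) / 40)
u1 = lax_wendroff_step(u0, 1.0)
```

The quadratic-in-c term is the second-order Taylor correction in time; higher-order schemes of the type the abstract discusses add further terms, and the paper's contribution is to retune their coefficients for better stability.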
Explicit Time-Stepping for Stiff ODEs
Eriksson, Kenneth; Logg, Anders; 10.1137/S1064827502409626
2012-01-01
We present a new strategy for solving stiff ODEs with explicit methods. By adaptively taking a small number of stabilizing small explicit time steps when necessary, a stiff ODE system can be stabilized enough to allow for time steps much larger than what is indicated by classical stability analysis. For many stiff problems the cost of the stabilizing small time steps is small, so the improvement is large. We illustrate the technique on a number of well-known stiff test problems.
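The strategy can be caricatured on the Dahlquist test problem: occasional groups of small explicit steps damp the stiff transients that the large steps excite. The fixed step sizes and cycle pattern below are hand-picked for illustration, whereas the actual method selects them adaptively.

```python
def stabilized_explicit_solve(f, u0, t_end, dt_large, dt_small, m):
    """Explicit Euler in which each large step is followed by m small
    stabilizing steps that damp the stiff modes the large step excites
    (a sketch of the Eriksson-Logg idea with fixed, illustrative steps)."""
    u, t = u0, 0.0
    while t < t_end:
        u = u + dt_large * f(t, u)      # cheap, individually unstable step
        t += dt_large
        for _ in range(m):              # stabilizing small steps
            u = u + dt_small * f(t, u)
            t += dt_small
    return u

# Stiff test problem u' = -1000*u: one large step amplifies the solution
# by |1 - 5| = 4, three small steps damp it by |1 - 1.5|**3 = 0.125,
# so each cycle contracts by a net factor of 0.5.
u_final = stabilized_explicit_solve(lambda t, u: -1000.0 * u, 1.0,
                                    0.1, 0.005, 0.0015, 3)
```

Taking every step at the classically stable size (here dt ≤ 0.002) would cost more evaluations per unit time than this large-plus-small cycle, which is where the claimed improvement comes from.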
Leimkuhler, Ben; Tuckerman, Mark E
2013-01-01
Molecular dynamics is one of the most commonly used approaches for studying the dynamics and statistical distributions of many physical, chemical, and biological systems using atomistic or coarse-grained models. It is often the case, however, that the interparticle forces drive motion on many time scales, and the efficiency of a calculation is limited by the choice of time step, which must be sufficiently small that the fastest force components are accurately integrated. Multiple time-stepping algorithms partially alleviate this inefficiency by assigning to each time scale an appropriately chosen step-size. However, such approaches are limited by resonance phenomena, wherein motion on the fastest time scales limits the step sizes associated with slower time scales. In atomistic models of biomolecular systems, for example, resonances limit the largest time step to around 5-6 fs. In this paper, we introduce a set of stochastic isokinetic equations of motion that are shown to be rigorously ergodic and that can b...
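The multiple time-stepping idea referred to above is commonly realized as the impulse (r-RESPA) scheme: slow forces kick the momenta at the outer step, fast forces are integrated with an inner velocity-Verlet loop. This is a generic one-particle sketch with illustrative parameters, not the paper's stochastic isokinetic method.

```python
def respa_step(q, p, m, f_fast, f_slow, dt, n_inner):
    """One impulse-style multiple time-step (r-RESPA) integration step:
    the slow force kicks the momentum at the outer step size dt, while
    the fast force is integrated with n_inner velocity-Verlet substeps."""
    p = p + 0.5 * dt * f_slow(q)         # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):             # inner loop (fast force)
        p = p + 0.5 * h * f_fast(q)
        q = q + h * p / m
        p = p + 0.5 * h * f_fast(q)
    p = p + 0.5 * dt * f_slow(q)         # outer half-kick (slow force)
    return q, p

# Stiff spring (fast) plus a weak background restoring force (slow):
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = respa_step(q, p, 1.0,
                      lambda x: -100.0 * x,    # fast force, omega = 10
                      lambda x: -0.1 * x,      # slow force
                      dt=0.05, n_inner=10)
```

The resonance limitation the abstract mentions appears when the outer step approaches half the fast period (here omega*dt near pi); the chosen dt = 0.05 keeps omega*dt = 0.5, safely away from it.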
2016-12-01
…and corresponding limitations of computational power. Such simplifications can produce misleading results. For example, Radar Cross Section (RCS) effects in response to time-varying… (Figure 6.1: the RCS of an F-16 Falcon fighter model simulated with CST Studio software at a signal frequency of 8 GHz.)
Carrander, Claes; Mousavi, Seyed Ali; Engdahl, G. öran
2017-02-01
In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high, and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the used hysteresis model. In this paper, a hysteresis model based on the first-order reversal-curve has been used.
Directory of Open Access Journals (Sweden)
Shanming Wang
2015-01-01
Full Text Available Now electric machines integrate with power electronics to form inseparable systems in many applications requiring high performance. For such systems, two kinds of nonlinearities, the magnetic nonlinearity of the iron core and the circuit nonlinearity caused by power electronics devices, coexist at the same time, which makes simulation time-consuming. In this paper, the multiloop model combined with the FE model of AC-DC synchronous generators, as one example of an electric machine with a power electronics system, is set up. The FE method is applied for the magnetic nonlinearity and a variable-step, variable-topology simulation method is applied for the circuit nonlinearity. In order to improve the simulation speed, the incomplete Cholesky conjugate gradient (ICCG) method is used to solve the state equation. However, when a power electronics device switches off, a convergence difficulty occurs, so a straightforward approach to achieve convergence of the simulation is proposed. Finally, the simulation results are compared with the experiments.
Kaiser, Andreas; Neugirg, Fabian; Hass, Erik; Jose, Steffen; Haas, Florian; Schmidt, Jürgen
2016-04-01
In erosional research a variety of processes are well understood and have been mimicked under laboratory conditions. In complex natural systems such as Alpine environments a multitude of influencing factors tend to superimpose single processes in a mixed signal which impedes a reliable interpretation. These mixed signals can already be captured by geoscientific research approaches such as sediment collectors, erosion pins or remote sensing surveys. Nevertheless, they fail to distinguish between single processes and their individual impact on slope morphology. Throughout the last two years a highly active slope of unsorted glacial deposits in the northern Alps has been monitored by repeated terrestrial laser scans roughly every three months. Resulting high resolution digital elevation models of difference were produced to identify possible seasonal patterns. By reproducing the TLS results with a physically based erosion model (EROSION 3D) ran with in situ input data from rainfall simulations and a climate station a better understanding of individual mechanism could be achieved. However, the already elaborate combination of soil science and close range remote sensing could not answer all questions concerning the slopes behaviour, especially not for freeze and thaw cycles and the winter period. Therefore, an array of three fully automatic synchronised cameras was setup to generate continuous 3D surface models. Among the main challenges faced for the system were the energy supply and durability, perspectives of the cameras to avoid shadowing and to guarantee sufficient overlap, a certain robustness to withstand rough alpine weather conditions, the scaling of each 3D model by tracked ground control points and the automatic data handling. First results show individual processes sculpting the slope's morphology but further work is required to improve automatic point cloud creation and change monitoring.
DEFF Research Database (Denmark)
Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad
2013-01-01
is henceforth addressed as radiationReactingLTSFoam (rareLTSFoam). A performance benchmarking exercise is here carried out to evaluate the effect of each LTS parameter on calculation stability, results accuracy and computational runtime. The model validation uses two test cases. The first test case presents...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...
Physical modeling of stepped spillways
Stepped spillways applied to embankment dams are becoming popular for addressing the rehabilitation of aging watershed dams, especially those situated in the urban landscape. Stepped spillways are typically placed over the existing embankment, which provides for minimal disturbance to the original ...
High-resolution seismic wave propagation using local time stepping
Peter, Daniel
2017-03-13
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
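The potential savings can be quantified with a simple cost model: global time stepping charges every element the smallest stable step, while LTS charges each element its own power-of-two sub-step of the coarsest step. This is purely an illustrative cost sketch, not the paper's algorithm; element counts and step sizes are assumptions.

```python
import math

def lts_speedup(dt_elems):
    """Idealized cost model comparing global time stepping (every element
    advances with the smallest stable dt) against LTS, where each element
    sub-cycles on the nearest power-of-two fraction of the largest dt.
    Returns (global_cost, lts_cost) in element-updates per unit time."""
    dt_min, dt_max = min(dt_elems), max(dt_elems)
    global_cost = len(dt_elems) / dt_min
    lts_cost = 0.0
    for dt in dt_elems:
        # coarsest power-of-two refinement level still within dt
        level = max(0, math.ceil(math.log2(dt_max / dt)))
        lts_cost += (2 ** level) / dt_max
    return global_cost, lts_cost

# A mesh of 1000 coarse elements plus 10 refined ones with 100x smaller dt:
g, l = lts_speedup([1.0] * 1000 + [0.01] * 10)
```

In this example the ten refined elements force the global scheme to take 100x more steps everywhere, while LTS confines the expensive sub-cycling to those ten elements, a speedup of roughly 44x.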
Adaptive time steps in trajectory surface hopping simulations
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H.,.OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with the neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which do not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because these are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particles systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions by classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
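The IRT idea for a single pair can be sketched from the Smoluchowski Green's-function result for a fully diffusion-controlled reaction: the probability that a pair initially separated by r0 (reaction radius R, relative diffusion coefficient D) has reacted by time t is W(t) = (R/r0)·erfc((r0 − R)/√(4Dt)), and inverting W yields a direct sample of the reaction time. This is the textbook special case, hedged as an illustration, not the RITRACKS code or the partially diffusion-controlled kernels the paper treats.

```python
import math, random

def sample_irt_reaction_time(r0, R, D, rng=random.random):
    """Sample one pair's reaction time by inverting the Smoluchowski
    reaction probability W(t) = (R/r0)*erfc((r0-R)/sqrt(4*D*t)).
    Returns math.inf if the pair escapes without reacting."""
    u = rng()
    p_react = R / r0                  # ultimate reaction probability W(inf)
    if u >= p_react:
        return math.inf               # pair diffuses apart forever
    target = u / p_react              # solve erfc(x) = target for x > 0
    lo, hi = 0.0, 10.0
    for _ in range(60):               # bisection; erfc is decreasing
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > target:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return (r0 - R) ** 2 / (4.0 * D * x * x)

random.seed(7)
t_react = sample_irt_reaction_time(2.0, 1.0, 1.0)  # may be math.inf
```

Sampling reaction times directly, without tracking positions, is what makes IRT so much faster than the step-by-step method; the SBS method pays for positional information by propagating every species through the same Green's functions at each step.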
A NEW ALGORITHM OF TIME STEPPING IN DYNAMIC VISCOELASTIC PROBLEMS
Institute of Scientific and Technical Information of China (English)
杨海天; 高强; 郭杏林; 邬瑞锋
2001-01-01
A new time-stepping scheme for solving dynamic viscoelastic problems is presented. By expanding variables at a discrete time interval, FEM-based recurrent formulae are derived. A self-adaptive algorithm with different time-step sizes can be carried out to improve computing accuracy. Numerical validation shows satisfactory performance.
Combating cancer one step at a time
Directory of Open Access Journals (Sweden)
R.N Sugitha Nadarajah
2016-10-01
widespread consequences, not only in a medical sense but also socially and economically,” says Dr. Abdel-Rahman. “We need to put in every effort to combat this fatal disease,” he adds.Tackling the spread of cancer and the increase in the number of cases reported every year is not without its challenges, he asserts. “I see the key challenges as the unequal availability of cancer treatments worldwide, the increasing cost of cancer treatment, and the increased median age of the population in many parts of the world, which carries with it a consequent increase in the risk of certain cancers,” he says. “We need to reassess the current pace and orientation of cancer research because, with time, cancer research is becoming industry-oriented rather than academia-oriented — which, in my view, could be very dangerous to the future of cancer research,” adds Dr. Abdel-Rahman. “Governments need to provide more research funding to improve the outcome of cancer patients,” he explains.His efforts and hard work have led to him receiving a number of distinguished awards, namely the UICC International Cancer Technology Transfer (ICRETT) fellowship in 2014 at the Investigational New Drugs Unit in the European Institute of Oncology, Milan, Italy; the EACR travel fellowship in 2015 at The Christie NHS Foundation Trust, Manchester, UK; and also several travel grants to Ireland, Switzerland, Belgium, Spain, and many other countries where he attended medical conferences. Dr. Abdel-Rahman is currently engaged in a project to establish a clinical/translational cancer research center at his institute, which seeks to incorporate various cancer-related disciplines in order to produce a real bench-to-bedside practice, hoping that it would “change research that may help shape the future of cancer therapy”.Dr. Abdel-Rahman is also an active founding member of the clinical research unit at his institute and is a representative to the prestigious European Organization for Research and
Indian Academy of Sciences (India)
K T Kashyap; C Ramachandra; B Chatterji; S Lele
2000-10-01
In commercial practice, two-step ageing is commonly used in Al–Zn–Mg alloys to produce a fine dispersion of η′ precipitates to accentuate the mechanical properties and resistance to stress corrosion cracking. While this is true in Al–Zn–Mg alloys, two-step ageing leads to inferior properties in Al–Mg–Si alloys. This contrasting behaviour in different alloys can be explained by Pashley’s kinetic model, which addresses the stability of clusters after two-step ageing. In the development of the model, the surface energy term between cluster and matrix is taken into account, while the coherency strains between the cluster and matrix are not considered. In the present work, a model is developed which takes into account the coherency strains between cluster and matrix and defines a new stability criterion, inclusive of the strain energy term. Experiments were done on AA 7010 aluminium alloy by carrying out a two-step ageing treatment, and the results fit the new stability criterion. Thus the new model for two-step ageing is verified in the case of an Al–Zn–Mg alloy.
Diffeomorphic image registration with automatic time-step adjustment
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst;
2015-01-01
In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required...
STEP - Product Model Data Sharing and Exchange
DEFF Research Database (Denmark)
Kroszynski, Uri
1998-01-01
During the last fifteen years, a very large effort to standardize the product models employed in product design, manufacturing and other life-cycle phases has been undertaken. This effort has the acronym STEP, and resulted in the International Standard ISO-10303 "Industrial Automation Systems - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development......
A time stepping method in analysis of nonlinear structural dynamics
Directory of Open Access Journals (Sweden)
Gholampour A. A.
2011-12-01
Full Text Available In this paper a new method is proposed for direct time integration in structural dynamics problems. The proposed method assumes second-order variations of the acceleration at each time step, so more terms in the Taylor series expansion are used compared to other methods. Because of the increased order of variation of the acceleration, this method has higher accuracy than classical methods. The displacement function is a polynomial with five constants, which are calculated using: two equations for the initial conditions (from the end of the previous time step), two equations for satisfying equilibrium at both ends of the time step, and one equation for the weighted residual integration. The proposed method has higher stability and order of accuracy than the other methods.
Newmark local time stepping on high-performance computing architectures
Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf
2017-04-01
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
New Multi-step Worm Attack Model
Robiah, Y; Shahrin, S; Faizal, M A; Zaki, M Mohd; Marliza, R
2010-01-01
The traditional worms such as Blaster, Code Red, Slammer and Sasser, are still infecting vulnerable machines on the internet. They will remain as significant threats due to their fast spreading nature on the internet. Various traditional worms attack pattern has been analyzed from various logs at different OSI layers such as victim logs, attacker logs and IDS alert log. These worms attack pattern can be abstracted to form worms' attack model which describes the process of worms' infection. For the purpose of this paper, only Blaster variants were used during the experiment. This paper proposes a multi-step worm attack model which can be extended into research areas in alert correlation and computer forensic investigation.
Multi-time-step domain coupling method with energy control
DEFF Research Database (Denmark)
Mahjoubi, N.; Krenk, Steen
2010-01-01
the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...
4 Steps for Redesigning Time for Student and Teacher Learning
Nazareno, Lori
2017-01-01
Everybody complains about a lack of time in school, but few are prepared to do anything about it. Laying the foundation before making such a shift is essential to the success of the change. Once a broad-based team has been chosen to do the work, they can follow a process explained in four steps with the apt acronym of T.I.M.E.: Taking stock,…
Modified precise time step integration method of structural dynamic analysis
Institute of Scientific and Technical Information of China (English)
Wang Mengfu; Zhou Xiyuan
2005-01-01
The precise time step integration method proposed for linear time-invariant homogeneous dynamic systems can provide precise numerical results that approach an exact solution at the integration points. However, difficulty arises when the algorithm is used for non-homogeneous dynamic systems, due to the inverse matrix calculation and the simulation accuracy of the applied loading. By combining the Gaussian quadrature method and state space theory with the calculation technique of the matrix exponential function in the precise time step integration method, a new modified precise time step integration method (i.e., an algorithm with an arbitrary order of accuracy) is proposed. In the new method, no inverse matrix calculation or simulation of the applied loading is needed, and the computing efficiency is improved. In particular, the proposed method is independent of the quality of the matrix H. If the matrix H is singular or nearly singular, the advantage of the method is remarkable. The numerical stability of the proposed algorithm is discussed and a numerical example is given to demonstrate the validity and efficiency of the algorithm.
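The core of precise time step integration, evaluating the matrix exponential exp(H·dt) by 2^N scaling and squaring while propagating only the increment exp(Hτ) − I to avoid round-off loss, can be sketched as follows. This is a generic sketch of the underlying technique; the values of N and the Taylor order are customary choices, not those of this paper.

```python
import numpy as np

def precise_exp(H, dt, N=20, order=4):
    """Precise-integration evaluation of exp(H*dt): Taylor-expand the
    increment T_a = exp(H*tau) - I for the tiny substep tau = dt/2**N,
    then double N times via exp(2*tau) - I = 2*T_a + T_a @ T_a.
    Keeping the increment (rather than the full matrix) preserves the
    small terms that would be lost when added to the identity."""
    tau = dt / (2 ** N)
    Ta = np.zeros_like(H)
    term = np.eye(H.shape[0])
    for k in range(1, order + 1):        # truncated Taylor series of e^{H*tau} - I
        term = term @ (H * tau) / k
        Ta = Ta + term
    for _ in range(N):                   # 2**N doublings
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(H.shape[0]) + Ta

# Rotation generator: exp(H * pi/2) should be a 90-degree rotation.
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
R90 = precise_exp(H, np.pi / 2.0)
```

Because tau is around dt/10^6, the truncated Taylor series is accurate to machine precision, which is why the method's results "approach an exact solution at the integration points" for homogeneous systems.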
Integrated Modelling - the next steps (Invited)
Moore, R. V.
2010-12-01
Integrated modelling (IM) has made considerable advances over the past decade but it has not yet been taken up as an operational tool in the way that its proponents had hoped. The reasons why will be discussed in Session U17. This talk will propose topics for a research and development programme and suggest an institutional structure which, together, could overcome the present obstacles. Their combined aim would be first to make IM into an operational tool usable by competent public authorities and commercial companies and, in time, to see it evolve into the modelling equivalent of Google Maps, something accessible and usable by anyone with a PC or an iPhone and an internet connection. In a recent study, a number of government agencies, water authorities and utilities applied integrated modelling to operational problems. While the project demonstrated that IM could be used in an operational setting and had benefit, it also highlighted the advances that would be required for its widespread uptake. These were: greatly improving the ease with which models could be a) made linkable, b) linked and c) run; developing a methodology for applying integrated modelling; developing practical options for calibrating and validating linked models; addressing the science issues that arise when models are linked; extending the range of modelling concepts that can be linked; enabling interface standards to pass uncertainty information; making the interface standards platform independent; extending the range of platforms to include those for high performance computing; developing the concept of modelling components as web services; separating simulation code from the model’s GUI, so that all the results from the linked models can be viewed through a single GUI; developing scenario management systems so that there is an audit trail of the version of each model and dataset used in each linked model run. In addition to the above, there is a need to build a set of integrated
Semiclassical two-step model for strong-field ionization
Shvetsov-Shilovski, N I; Madsen, L B; Räsänen, E; Lemell, C; Burgdörfer, J; Arbó, D G; Tőkési, K
2016-01-01
We present a semiclassical two-step model for strong-field ionization that accounts for path interferences of tunnel-ionized electrons in the ionic potential beyond perturbation theory. Within the framework of a classical trajectory Monte Carlo representation of the phase-space dynamics, the model employs the semiclassical approximation to the phase of the full quantum propagator in the exit channel. By comparison with the exact numerical solution of the time-dependent Schrödinger equation for strong-field ionization of hydrogen, we show that for suitable choices of the momentum distribution after the first tunneling step, the model yields good quantitative agreement with the full quantum simulation. The two-dimensional photoelectron momentum distributions, the energy spectra, and the angular distributions are found to be in good agreement with the corresponding quantum results. Specifically, the model quantitatively reproduces the fan-like interference patterns in the low-energy part of the two-dimensional...
Crowder, D W; Onstad, D W; Cray, M E; Pierce, C M F; Hager, A G; Ratcliffe, S T; Steffey, K L
2005-04-01
Western corn rootworm, Diabrotica virgifera virgifera LeConte, has overcome crop rotation in several areas of the north central United States. The effectiveness of crop rotation for management of corn rootworm has begun to fail in many areas of the midwestern United States, thus new management strategies need to be developed to control rotation-resistant populations. Transgenic corn, Zea mays L., effective against western corn rootworm, may be the most effective new technology for control of this pest in areas with or without populations adapted to crop rotation. We expanded a simulation model of the population dynamics and genetics of the western corn rootworm for a landscape of corn; soybean, Glycine max (L.); and other crops to study the simultaneous development of resistance to both crop rotation and transgenic corn. Results indicate that planting transgenic corn to first-year cornfields is a robust strategy to prevent resistance to both crop rotation and transgenic corn in areas where rotation-resistant populations are currently a problem or may be a problem in the future. In these areas, planting transgenic corn only in continuous cornfields is not an effective strategy to prevent resistance to either trait. In areas without rotation-resistant populations, gene expression of the allele for resistance to transgenic corn, R, is the most important factor affecting the evolution of resistance. If R is recessive, resistance can be delayed longer than 15 yr. If R is dominant, resistance may be difficult to prevent. In a sensitivity analysis, results indicate that density dependence, rotational level in the landscape, and initial allele frequency are the three most important factors affecting the results.
Sharing Steps in the Workplace: Changing Privacy Concerns Over Time
DEFF Research Database (Denmark)
Jensen, Nanna Gorm; Shklovski, Irina
2016-01-01
Personal health technologies are increasingly introduced in workplace settings. Yet little is known about workplace implementations of activity tracker use and the kind of experiences and concerns employees might have when engaging with these technologies in practice. We report on an observational study of a Danish workplace participating in a step counting campaign. We find that the concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...
Time step size limitation introduced by the BSSN Gamma Driver
Energy Technology Data Exchange (ETDEWEB)
Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)
2010-08-21
Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a manner similar to a Courant-Friedrichs-Lewy (CFL) condition, but one that is independent of the spatial resolution. We give a didactic explanation of this issue, show why mesh refinement simulations in particular suffer from it, and point to a simple remedy. (note)
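The contrast between a resolution-dependent CFL limit and a resolution-independent damping-type limit can be sketched numerically. The constants below (Courant factor, the assumed bound dt ≲ safety/eta) are illustrative, not the paper's exact stability bounds.

```python
# Contrast of two time-step limits (illustrative constants, not the paper's
# exact bounds): a CFL-type limit shrinks with the grid spacing dx, while a
# damping-type limit, such as one induced by a Gamma-Driver-like damping
# parameter eta, does not depend on dx at all.

def cfl_limit(dx, wave_speed=1.0, courant=0.5):
    return courant * dx / wave_speed

def damping_limit(eta, safety=1.0):
    # assumed form dt <~ safety / eta; resolution-independent
    return safety / eta

eta = 2.0
for level, dx in enumerate([1.0, 0.5, 0.25, 0.125]):
    dt = min(cfl_limit(dx), damping_limit(eta))
    limiter = "CFL" if cfl_limit(dx) < damping_limit(eta) else "damping"
    print(f"refinement level {level}: dx={dx:<6} dt={dt:.4f} ({limiter}-limited)")
```

On coarse grids the damping-type limit binds while fine grids remain CFL-limited, which mirrors why mesh refinement simulations, whose coarse outer grids would otherwise permit large steps, are particularly affected.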
Sequential time-step generation companies decisions in oligopolistic electricity market
Energy Technology Data Exchange (ETDEWEB)
Gutierrez-Alcaraz, Guillermo [Programa de Graduados e Investigacion en Ingenieria Electrica, Departamento de Ingenieria Electrica y Electronica del Instituto Tecnologico de Morelia, Morelia, Michoacan (Mexico)
2008-05-15
This paper studies the production decisions of generation companies (GENCOs) which are fully engaged in oligopolistic electricity markets. The model presented is based upon the static equilibrium model solved sequentially in time. By decomposing the problem in time, each time-step is solved independently using a Cournot-like market model. The time dimension is divided into discrete, 1-h time-steps. The model also incorporates the effects of technical and temporal constraints such as time on/off and ramp up/down. Since GENCOs tend toward repetitive decision-making, they can more easily learn from the market. The concept of forward expectations and the lessons derived from the market are introduced, and several numerical examples are provided. (author)
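The hour-by-hour decomposition described above can be sketched with the textbook symmetric Cournot equilibrium. The demand intercepts, slope, and marginal cost below are hypothetical numbers, and the technical constraints the paper adds (ramping, minimum up/down times) are omitted.

```python
# Hour-by-hour Cournot dispatch sketch (illustrative demand and cost values).
# Each 1-h time step is solved independently, as in the sequential approach:
# with inverse demand P = a - b*Q and identical marginal cost c, the n-firm
# symmetric Cournot equilibrium output per firm is q = (a - c) / (b*(n + 1)).

def cournot_quantity(a, b, c, n):
    return max(0.0, (a - c) / (b * (n + 1)))

n_gencos, b, c = 3, 0.01, 20.0
hourly_demand_intercept = [80.0, 70.0, 95.0, 120.0]  # one value per 1-h step

schedule = []
for a in hourly_demand_intercept:
    q = cournot_quantity(a, b, c, n_gencos)
    price = a - b * (n_gencos * q)   # market price at total output n*q
    schedule.append((q, price))

for hour, (q, p) in enumerate(schedule):
    print(f"hour {hour}: q_i = {q:.1f} MW, price = {p:.2f}")
```

The equilibrium price (a + n·c)/(n + 1) stays above marginal cost in every hour, the oligopolistic markup that distinguishes this model from perfect competition.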
Modeling the stepping mechanism in negative lightning leaders
Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir
2017-04-01
It is well known that negative leaders develop in a stepwise manner via the mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible because the model accounts for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account an asymmetry between the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need roughly half as strong an electric field as negative ones to appear and propagate. Extinction of a conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.
NUMERICAL SIMULATION FOR THE STEPPED SPILLWAY OVERFLOW WITH TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
Stepped spillways have increasingly become an important measure for flood discharge and energy dissipation. The velocity, pressure, and other characteristics of the flow over a stepped spillway therefore need to be known accurately. So far, however, stepped spillway overflow has been studied mainly through physical model tests. In this paper, the stepped spillway overflow was simulated with a Reynolds stress turbulence model. The simulation results were analyzed and compared with measured data, and the agreement is satisfactory.
Rautenberg, Tamlyn; Hulme, Claire; Edlin, Richard
2016-01-01
Background Although guidance on good research practice in health economic modeling is widely available, there is still a need for a simpler instructive resource which could guide a beginner modeler alongside modeling for the first time. Aim To develop a beginner’s guide to be used as a handheld guide contemporaneous to the model development process. Methods A systematic review of best practice guidelines was used to construct a framework of steps undertaken during the model development process. Focused methods review supplemented this framework. Consensus was obtained among a group of model developers to review and finalize the content of the preliminary beginner’s guide. The final beginner’s guide was used to develop cost-effectiveness models. Results Thirty-two best practice guidelines were data extracted, synthesized, and critically evaluated to identify steps for model development, which formed a framework for the beginner’s guide. Within five phases of model development, eight broad submethods were identified and 19 methodological reviews were conducted to develop the content of the draft beginner’s guide. Two rounds of consensus agreement were undertaken to reach agreement on the final beginner’s guide. To assess fitness for purpose (ease of use and completeness), models were developed independently and by the researcher using the beginner’s guide. Conclusion A combination of systematic review, methods reviews, consensus agreement, and validation was used to construct a step-by-step beginner’s guide for developing decision analytical cost-effectiveness models. The final beginner’s guide is a step-by-step resource to accompany the model development process from understanding the problem to be modeled, model conceptualization, model implementation, and model checking through to reporting of the model results. PMID:27785080
Exact relativistic time evolution for a step potential barrier
Villavicencio, J
2000-01-01
We derive an exact analytic solution to a Klein-Gordon equation for a step potential barrier with cutoff plane wave initial conditions, in order to explore wave evolution in a classical forbidden region. We find that the relativistic solution rapidly evanesces within a depth $2x_p$ inside the potential, where $x_p$ is the penetration length of the stationary solution. Beyond the characteristic distance $2x_p$, a Sommerfeld-type precursor travels along the potential at the speed of light, $c$. However, no spatial propagation of a main wavefront along the structure is observed. We also find a non-causal time evolution of the wavefront peak. The effect is only an apparent violation of Einstein causality.
Six-step reasoning model for robot-soccer
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The decision-making system of robot-soccer is a kind of knowledge system. A Six-step Reasoning Model is established by formalizing its expert knowledge and decision-making process. Furthermore, many other models can be considered as mutation and evolution of the Six-step Reasoning Model.
Explicit local time-stepping methods for time-dependent wave propagation
Grote, Marcus
2012-01-01
Semi-discrete Galerkin formulations of transient wave equations, either with conforming or discontinuous Galerkin finite element discretizations, typically lead to large systems of ordinary differential equations. When explicit time integration is used, the time-step is constrained by the smallest elements in the mesh for numerical stability, possibly a high price to pay. To overcome that overly restrictive stability constraint on the time-step, yet without resorting to implicit methods, explicit local time-stepping schemes (LTS) are presented here for transient wave equations either with or without damping. In the undamped case, leap-frog based LTS methods lead to high-order explicit LTS schemes, which conserve the energy. In the damped case, when energy is no longer conserved, Adams-Bashforth based LTS methods also lead to explicit LTS schemes of arbitrarily high accuracy. When combined with a finite element discretization in space with an essentially diagonal mass matrix, the resulting time-marching scheme...
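The cost argument behind local time-stepping can be made concrete with a back-of-the-envelope estimate. The mesh below is hypothetical, and the work metric (element updates per unit simulated time) ignores interface bookkeeping, so this is a sketch of the scaling, not of the leap-frog or Adams-Bashforth schemes themselves.

```python
# Back-of-the-envelope cost comparison (illustrative mesh, not the paper's
# schemes): with a global explicit step, every element advances with
# dt_global = C * min(h) / c, so a few tiny elements force many steps on the
# whole mesh. Local time-stepping (LTS) lets each element take the largest
# stable step that divides the coarsest stable step.

import math

def work_per_unit_time(element_sizes, c=1.0, courant=0.5, local=False):
    dt_elem = [courant * h / c for h in element_sizes]   # stable step per element
    if not local:
        dt = min(dt_elem)                                # smallest element rules
        return sum(1.0 / dt for _ in element_sizes)
    dt_max = max(dt_elem)
    # LTS: an element with stable step dt_e takes ceil(dt_max/dt_e) substeps
    return sum(math.ceil(dt_max / dt_e) / dt_max for dt_e in dt_elem)

mesh = [1.0] * 95 + [0.01] * 5   # mostly coarse, a few highly refined elements
global_work = work_per_unit_time(mesh)
lts_work = work_per_unit_time(mesh, local=True)
print(f"estimated speedup: {global_work / lts_work:.1f}x")
```

When only a small fraction of elements is refined, the estimated saving is large, which is exactly the regime where LTS pays off.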
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach within the hybrid Monte Carlo framework and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.
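The force-splitting idea at the heart of MTS can be sketched with an impulse (r-RESPA-style) integrator on a toy 1D particle. The spring constants and step sizes are illustrative assumptions; GSHMC's generalized momentum updates and shadow-Hamiltonian filtering are not reproduced here.

```python
# Minimal multiple-time-stepping (impulse / r-RESPA-style) sketch for a toy
# 1D particle with a stiff 'fast' force and a 'slow' force. Illustrates the
# force-splitting idea only, not the GSHMC/GHMC Monte Carlo machinery.

def fast_force(x, k_fast=100.0):
    return -k_fast * x

def slow_force(x, k_slow=1.0):
    return -k_slow * x

def mts_step(x, v, dt_outer, n_inner, mass=1.0):
    v += 0.5 * dt_outer * slow_force(x) / mass   # half kick with slow force
    dt_inner = dt_outer / n_inner
    for _ in range(n_inner):                     # velocity Verlet, fast force
        v += 0.5 * dt_inner * fast_force(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x) / mass
    v += 0.5 * dt_outer * slow_force(x) / mass   # closing half kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = mts_step(x, v, dt_outer=0.04, n_inner=8)

# total energy of the combined oscillator (k = 100 + 1) should stay bounded
energy = 0.5 * v * v + 0.5 * 101.0 * x * x
print(energy)
```

The slow force is evaluated only once per outer step, which is where the savings come from when slow forces are expensive; the outer step must still avoid resonance with the fast period, the instability MTS-GHMC's stochastic stabilization targets.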
Continuous Time Model Estimation
Carl Chiarella; Shenhuai Gao
2004-01-01
This paper introduces an easy-to-follow method for continuous time model estimation. It serves as an introduction on how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
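The continuous-to-discrete conversion mentioned above has a simple closed form in the scalar case, shown here with hypothetical parameter values: for dx/dt = a·x + b·u with u held constant over each sampling interval (zero-order hold), the discretization is exact.

```python
# Scalar illustration of continuous-to-discrete conversion (hypothetical
# parameter values): dx/dt = a*x + b*u with u constant over each interval dt
# has the exact discrete form x[k+1] = A_d*x[k] + B_d*u[k], where
# A_d = exp(a*dt) and B_d = (exp(a*dt) - 1)/a * b  (a != 0 assumed).

import math

def discretize(a, b, dt):
    a_d = math.exp(a * dt)
    b_d = (a_d - 1.0) / a * b
    return a_d, b_d

a, b, dt = -0.5, 2.0, 0.1
a_d, b_d = discretize(a, b, dt)

# simulate the discrete model against the exact continuous solution
x, u = 1.0, 0.3
for _ in range(50):
    x = a_d * x + b_d * u
t = 50 * dt
x_exact = math.exp(a * t) * 1.0 + (b * u / -a) * (1.0 - math.exp(a * t))
print(x, x_exact)
```

The two values agree to rounding error, since the zero-order-hold discretization is exact for a constant input; in the matrix-valued case the same formulas hold with the matrix exponential.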
Two-step variable selection in quantile regression models
Directory of Open Access Journals (Sweden)
FAN Yali
2015-06-01
Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high-dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
Directory of Open Access Journals (Sweden)
Rautenberg T
2016-10-01
Full Text Available Tamlyn Rautenberg,1 Claire Hulme,2 Richard Edlin,3 1Health Economics and HIV/AIDS Research Division (HEARD), University of Kwazulu Natal, KwaZulu Natal, South Africa; 2Leeds Institute of Health Sciences (LIHS), Academic Unit of Health Economics (AUHE), University of Leeds, West Yorkshire, United Kingdom; 3Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand Background: Although guidance on good research practice in health economic modeling is widely available, there is still a need for a simpler instructive resource which could guide a beginner modeler alongside modeling for the first time. Aim: To develop a beginner’s guide to be used as a handheld guide contemporaneous to the model development process. Methods: A systematic review of best practice guidelines was used to construct a framework of steps undertaken during the model development process. Focused methods review supplemented this framework. Consensus was obtained among a group of model developers to review and finalize the content of the preliminary beginner’s guide. The final beginner’s guide was used to develop cost-effectiveness models. Results: Thirty-two best practice guidelines were data extracted, synthesized, and critically evaluated to identify steps for model development, which formed a framework for the beginner’s guide. Within five phases of model development, eight broad submethods were identified and 19 methodological reviews were conducted to develop the content of the draft beginner’s guide. Two rounds of consensus agreement were undertaken to reach agreement on the final beginner’s guide. To assess fitness for purpose (ease of use and completeness), models were developed independently and by the researcher using the beginner’s guide. Conclusion: A combination of systematic review, methods reviews, consensus agreement, and validation was used to construct a step-by-step beginner’s guide for developing decision analytical
Formalising the Continuous/Discrete Modeling Step
Directory of Open Access Journals (Sweden)
Wen Su
2011-06-01
Full Text Available Formally capturing the transition from a continuous model to a discrete model is investigated using model based refinement techniques. A very simple model for stopping (e.g. of a train) is developed in both the continuous and discrete domains. The difference between the two is quantified using generic results from ODE theory, and these estimates can be compared with the exact solutions. Such results do not fit well into a conventional model based refinement framework; however they can be accommodated into a model based retrenchment. The retrenchment is described, and the way it can interface to refinement development on both the continuous and discrete sides is outlined. The approach is compared to what can be achieved using hybrid systems techniques.
Formalising the Continuous/Discrete Modeling Step
Banach, Richard; Su, Wen; Huang, Runlei; 10.4204/EPTCS.55.8
2011-01-01
Formally capturing the transition from a continuous model to a discrete model is investigated using model based refinement techniques. A very simple model for stopping (e.g. of a train) is developed in both the continuous and discrete domains. The difference between the two is quantified using generic results from ODE theory, and these estimates can be compared with the exact solutions. Such results do not fit well into a conventional model based refinement framework; however they can be accommodated into a model based retrenchment. The retrenchment is described, and the way it can interface to refinement development on both the continuous and discrete sides is outlined. The approach is compared to what can be achieved using hybrid systems techniques.
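The continuous/discrete gap the retrenchment quantifies can be illustrated with a toy stopping model. The braking law and numbers below are illustrative assumptions, not the paper's train model; the point is the ODE-theory estimate that the discrepancy shrinks roughly linearly with the step size dt.

```python
# Toy quantification of the continuous/discrete modelling gap (illustrative
# numbers): braking with dv/dt = -k*v has the exact continuous solution
# v(t) = v0*exp(-k*t), while a discrete controller updates
# v[n+1] = v[n]*(1 - k*dt). The gap shrinks roughly linearly with dt.

import math

def discrete_final_speed(v0, k, dt, t_end):
    v = v0
    for _ in range(round(t_end / dt)):
        v *= (1.0 - k * dt)
    return v

v0, k, t_end = 30.0, 0.5, 4.0
exact = v0 * math.exp(-k * t_end)
errs = []
for dt in [0.4, 0.2, 0.1]:
    err = abs(discrete_final_speed(v0, k, dt, t_end) - exact)
    errs.append(err)
    print(f"dt={dt}: |discrete - continuous| = {err:.4f}")
```

Halving dt roughly halves the discrepancy, the kind of generic first-order estimate that can be compared with exact solutions but does not fit a conventional refinement relation.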
The USMLE Step 2 CS: Time for a change.
Alvin, Matthew D
2016-08-01
The United States Medical Licensing Examination (USMLE(®)) Steps are a series of mandatory licensing assessments for all allopathic (MD degree) medical students in their transition from student to intern to resident physician. Steps 1, 2 Clinical Knowledge (CK), and 3 are daylong multiple-choice exams that quantify a medical student's basic science and clinical knowledge as well as their application of that knowledge using a three-digit score. In doing so, these Steps provide a standardized assessment that residency programs use to differentiate applicants and evaluate their competitiveness. Step 2 Clinical Skills (CS), the only other Step exam and the second component of Step 2, was created in 2004 to test clinical reasoning and patient-centered skills. As a Pass/Fail exam without a numerical scoring component, Step 2 CS provides minimal differentiation among applicants for residency programs. In this personal view article, it is argued that the current Step 2 CS exam should be eliminated for US medical students and propose an alternative consistent with the mission and purpose of the exam that imposes less of a burden on medical students.
Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure
Directory of Open Access Journals (Sweden)
Zenina Nadezda
2015-12-01
Full Text Available The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model by a two-step cluster analysis procedure to improve the reliability of the simulation model results. Two global parameters have been considered: headway and simulation step. Normal, uniform, and exponential models have been considered for headway generation. Application of the two-step cluster analysis procedure to the calibration has made it possible to reduce the time needed to select the simulation step and the headway generation model values.
Time-step limits for a Monte Carlo Compton-scattering method
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
Semiclassical two-step model for strong-field ionization
Shvetsov-Shilovski, N. I.; Lein, M.; Madsen, L. B.; Räsänen, E.; Lemell, C.; Burgdörfer, J.; Arbó, D. G.; Tőkési, K.
2016-07-01
We present a semiclassical two-step model for strong-field ionization that accounts for path interferences of tunnel-ionized electrons in the ionic potential beyond perturbation theory. Within the framework of a classical trajectory Monte Carlo representation of the phase-space dynamics, the model employs the semiclassical approximation to the phase of the full quantum propagator in the exit channel. By comparison with the exact numerical solution of the time-dependent Schrödinger equation for strong-field ionization of hydrogen, we show that for suitable choices of the momentum distribution after the first tunneling step, the model yields good quantitative agreement with the full quantum simulation. The two-dimensional photoelectron momentum distributions, the energy spectra, and the angular distributions are found to be in good agreement with the corresponding quantum results. Specifically, the model quantitatively reproduces the fanlike interference patterns in the low-energy part of the two-dimensional momentum distributions, as well as the modulations in the photoelectron angular distributions.
Steps towards improvement of Latvian geoid model
Janpaule, Inese; Balodis, Janis
2013-04-01
The high-precision geoid model is essential for normal height determination when GNSS positioning methods are used. In Latvia, the gravimetric geoid model LV'98 has been widely used by surveyors and scientists for more than 10 years. The computation of this model was performed with the GRAVSOFT software, using gravimetric measurements, digitised gravimetric data, and satellite altimetry data over the Baltic Sea; the estimated accuracy of the LV'98 geoid model is 6-8 cm (J. Kaminskis, 2010). However, the accuracy of the Latvian geoid model should be improved. In order to accomplish this task, several methods have been evaluated and test computations made. The KTH method was developed at the Royal Institute of Technology (KTH) in Stockholm. This method utilizes the least-squares modification of the Stokes integral for the biased, unbiased, and optimum stochastic solutions. The modified Bruns-Stokes integral combines regional terrestrial gravity data with a global geopotential model (GGM) (R. Kiamehr, 2006). The DFHRS (Digital Finite-Element Height Reference Surface) method has been developed at the Karlsruhe University of Applied Sciences, Faculty of Geomatics (R. Jäger, 1999). In the DFHRS concept the area is divided into smaller finite elements - meshes. The height reference surface N in each mesh is calculated by a polynomial in terms of (x,y) coordinates. Each group of meshes forms a patch, related to a set of individual parameters introduced by the datum parametrization. As input data, the European Gravimetric Geoid Model 1997 (EGG97) and 102 GNSS/levelling points were used. In order to improve the quality and accuracy of the Latvian geoid model, the development of a mobile digital zenith telescope for the determination of vertical deflections with 0.1" expected accuracy has commenced at the University of Latvia, Institute of Geodesy and Geoinformation. The project was started in 2010; its goal is to design a portable, cheap and robust instrument, using industrially
New Multi-step Worm Attack Model
Robiah, Y.; Rahayu, S. Siti; Shahrin , S.; M. FAIZAL A.; Zaki, M. Mohd; Marliza, R.
2010-01-01
Traditional worms such as Blaster, Code Red, Slammer, and Sasser are still infecting vulnerable machines on the internet. They will remain significant threats due to their fast-spreading nature on the internet. Various traditional worm attack patterns have been analyzed from various logs at different OSI layers, such as victim logs, attacker logs, and IDS alert logs. These attack patterns can be abstracted to form a worm attack model which describes the process of worm infection. Fo...
A step-by-step procedure for pH model construction in aquatic systems
Directory of Open Access Journals (Sweden)
A. F. Hofmann
2007-10-01
Full Text Available We present, by means of a simple example, a comprehensive step-by-step procedure to consistently derive a pH model of aquatic systems. As pH modeling is inherently complex, we make every step of the model generation process explicit, thus ensuring conceptual, mathematical, and chemical correctness. Summed quantities, such as total inorganic carbon and total alkalinity, and the influences of modeled processes on them are consistently derived. The model is subsequently reformulated until numerically and computationally simple dynamical solutions, like a variation of the operator splitting approach (OSA) and the direct substitution approach (DSA), are obtained. As several solution methods are pointed out, connections between previous pH modelling approaches are established. The final reformulation of the system according to the DSA allows for quantification of the influences of kinetic processes on the rate of change of proton concentration in models containing multiple biogeochemical processes. These influences are calculated including the effect of re-equilibration of the system due to a set of acid-base reactions in local equilibrium. This possibility of quantifying the influences of modeled processes on the pH makes the end product of the described model generation procedure a powerful tool for understanding the internal pH dynamics of aquatic systems.
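The core equilibrium inversion such a pH model must perform can be sketched for the carbonate system alone. The dissociation constants below are approximate freshwater values at 25 °C, and borate and water self-ionization are neglected for brevity, so this is a minimal sketch rather than a full alkalinity model.

```python
# Minimal example of the equilibrium inversion a pH model needs (approximate
# freshwater CO2 constants at 25 degC; borate and water self-ionization are
# neglected, so this is a sketch, not a full alkalinity model): given total
# inorganic carbon (DIC) and carbonate alkalinity TA, solve
#   TA = DIC*(K1*H + 2*K1*K2) / (H^2 + K1*H + K1*K2)   for H = [H+].

import math

K1, K2 = 4.45e-7, 4.69e-11  # mol/L, approximate values

def carbonate_alkalinity(h, dic):
    denom = h * h + K1 * h + K1 * K2
    return dic * (K1 * h + 2.0 * K1 * K2) / denom

def solve_ph(dic, ta, h_lo=1e-12, h_hi=1e-2, tol=1e-15):
    # carbonate alkalinity decreases monotonically with H, so bisection works
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if carbonate_alkalinity(h_mid, dic) > ta:
            h_lo = h_mid   # too much alkalinity -> solution is more acidic
        else:
            h_hi = h_mid
    return -math.log10(0.5 * (h_lo + h_hi))

ph = solve_ph(dic=2.0e-3, ta=1.9e-3)
print(ph)
```

In the direct substitution approach this kind of root-finding (or its differentiated form) supplies the proton concentration at every model time step, which is what makes the per-process influences on pH quantifiable.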
Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation.
Ilie, Silvana
2012-12-21
Stochastic modeling is essential for an accurate description of biochemical network dynamics at the level of a single cell. Biochemically reacting systems often evolve on multiple time-scales, and thus their stochastic mathematical models manifest stiffness. Such stochastic models are, in addition, computationally very challenging; hence the need for developing effective and accurate numerical methods for approximating their solutions. An important stochastic model of well-stirred biochemical systems is the chemical Langevin equation, a system of stochastic differential equations with multidimensional non-commutative noise. This model is valid in the regime of large molecular populations, far from the thermodynamic limit. In this paper, we propose a variable time-stepping strategy for the numerical solution of a general chemical Langevin equation, which applies for any level of randomness in the system. Our variable step-size method allows arbitrary values of the time-step. Numerical results on several models arising in applications show significant improvement in the accuracy and efficiency of the proposed adaptive scheme over the existing methods: strategies based on halving/doubling of the step size and fixed step-size ones.
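A generic step-doubling controller for a scalar SDE illustrates the kind of adaptivity being compared against. The drift, diffusion, tolerance, and step bounds below are toy assumptions, not the paper's method or a full chemical Langevin system.

```python
# Step-doubling time-step control sketch for a scalar SDE dX = f(X)dt + g(X)dW
# (toy drift/diffusion, not the paper's scheme). One full Euler-Maruyama step
# is compared against two half steps driven by the same Brownian increments;
# their difference drives the step-size choice.

import math, random

def adaptive_em(f, g, x0, t_end, dt0=0.1, tol=1e-3, dt_min=1e-6, dt_max=0.5):
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        dt = min(dt, t_end - t, dt_max)
        dw1 = random.gauss(0.0, math.sqrt(0.5 * dt))
        dw2 = random.gauss(0.0, math.sqrt(0.5 * dt))
        # one full step with dW = dw1 + dw2
        x_full = x + f(x) * dt + g(x) * (dw1 + dw2)
        # two half steps with the same Brownian increments
        x_half = x + f(x) * 0.5 * dt + g(x) * dw1
        x_half = x_half + f(x_half) * 0.5 * dt + g(x_half) * dw2
        err = abs(x_full - x_half)
        if err <= tol or dt <= dt_min:
            t, x = t + dt, x_half        # accept the more accurate value
            if err < 0.1 * tol:
                dt *= 2.0                # step was easy; try a larger one
        else:
            dt = max(0.5 * dt, dt_min)   # reject and retry with a smaller step
    return x

random.seed(1)
# with zero noise the scheme reduces to adaptive deterministic Euler
x_final = adaptive_em(f=lambda x: -x, g=lambda x: 0.0, x0=1.0, t_end=1.0)
print(x_final, math.exp(-1.0))
```

Reusing the same Brownian increments for the full and half steps is what keeps the error estimate consistent with the sampled path; an arbitrary-step-size method, as in the paper, avoids being restricted to halving/doubling.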
Ehrenfest's theorem and the validity of the two-step model for strong-field ionization
DEFF Research Database (Denmark)
Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer
By comparison with the solution of the time-dependent Schrodinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...
Step detection in single-molecule real time trajectories embedded in correlated noise.
Directory of Open Access Journals (Sweden)
Srikesh G Arunajadai
Full Text Available Single-molecule real time trajectories are embedded in high noise. Extracting kinetic or dynamic information of the molecules from these trajectories often requires idealization of the data into steps and dwells. One major premise behind the existing single-molecule data analysis algorithms is Gaussian 'white' noise, which displays no correlation in time and whose amplitude is independent of the data sampling frequency. This so-called 'white' noise is widely assumed, but its validity has not been critically evaluated. We show that correlated noise exists in single-molecule real time trajectories collected from optical tweezers. The assumption of white noise during analysis of these data can lead to serious over- or underestimation of the number of steps depending on the algorithms employed. We present a statistical method that quantitatively evaluates the structure of the underlying noise, takes the noise structure into account, and identifies steps and dwells in a single-molecule trajectory. Unlike existing data analysis algorithms, this method uses Generalized Least Squares (GLS) to detect steps and dwells. Under the GLS framework, the optimal number of steps is chosen using model selection criteria such as the Bayesian Information Criterion (BIC). Comparison with existing step detection algorithms showed that this GLS method can detect step locations with the highest accuracy in the presence of correlated noise. Because this method is automated and works directly with high-bandwidth data without pre-filtering or the assumption of Gaussian noise, it may be broadly useful for the analysis of single-molecule real time trajectories.
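The step-plus-BIC idea can be sketched in its simplest (white-noise, single-step) form on synthetic data. This is the ordinary-least-squares baseline the article improves on; a GLS treatment would additionally estimate and invert the noise covariance.

```python
# Single-step detection sketch under the white-noise assumption the article
# critiques (synthetic data). The best changepoint minimizes the residual sum
# of squares; BIC then arbitrates between the 0-step and 1-step models.

import math, random

random.seed(7)
n, true_cp = 100, 60
y = [random.gauss(0.0 if i < true_cp else 1.0, 0.2) for i in range(n)]

def rss(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def bic(total_rss, n_params):
    # Gaussian log-likelihood up to constants, plus the BIC complexity penalty
    return n * math.log(total_rss / n) + n_params * math.log(n)

best_cp = min(range(2, n - 2), key=lambda c: rss(y[:c]) + rss(y[c:]))
bic_no_step = bic(rss(y), 1)                                 # one mean
bic_one_step = bic(rss(y[:best_cp]) + rss(y[best_cp:]), 3)   # two means + cp

print(best_cp, bic_one_step < bic_no_step)
```

With correlated noise this OLS criterion systematically mis-counts steps, which is precisely why the article replaces the sum of squares with a GLS objective built on the estimated noise covariance.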
Dependence of Hurricane Intensity and Structures on Vertical Resolution and Time-Step Size
Institute of Scientific and Technical Information of China (English)
Da-Lin ZHANG; Xiaoxue WANG
2003-01-01
In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core clouds/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity, inner-core structures, and evolution of tropical storms, as well as other convectively driven weather systems.
Kuraz, Michal
2016-06-01
Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, owing to possible convection dominance and the convergence behaviour of the nonlinear solver. The method presented here combines an adaptive domain decomposition algorithm with a multi-time-step treatment of the actively changing subdomains.
Two step processes for meson production at the time of flight spectrometer at cosy.
Hassan, A M
2000-01-01
In this work the contribution of the two-step mechanism to the cross section of the reaction pd → ³He η is presented in a model calculation. A simple approach is used, where relativistic kinematics and empirical cross sections are employed. The on- and off-shell effects and the selective fusion of the baryons to ³He based on the velocity matching concept are described. For the first time the folding process includes the angular and energy dependence of the two subsequent steps (step A: pp → dπ⁺ and step B: π⁺n → ηp) in the simulation. The angular and energy dependences and the fusion probability of the baryons to a bound baryonic system (³He) are used to weight the events in each corresponding step. The velocity matching is the reason for the selective fusion of the baryons to ³He. The angular distribution predicted by the two-step processes shows a forward peak of ³He in the center-of-mass system, except in a small range of 0 to 10 MeV excess energy where the cross section is...
Development of a real time activity monitoring Android application utilizing SmartStep.
Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward
2016-08-01
Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms, and provides real time feedback. The development of activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
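Multinomial logistic discrimination of the kind used for the activity classes can be sketched with plain softmax regression on synthetic two-dimensional features (the actual SmartStep features, sensors, and data are not reproduced here; the clusters below merely stand in for three activity classes):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D "feature vectors" standing in for the insole features;
# three well-separated clusters play the role of three activity classes.
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Softmax (multinomial logistic) regression fitted by gradient descent
W, b = np.zeros((2, 3)), np.zeros(3)
Y = np.eye(3)[y]                               # one-hot labels
for _ in range(500):
    logits = X @ W + b
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # class probabilities
    W -= 0.5 * X.T @ (P - Y) / len(X)          # cross-entropy gradient step
    b -= 0.5 * (P - Y).mean(axis=0)

acc = np.mean((X @ W + b).argmax(axis=1) == y)  # training accuracy
```

On a phone, only the final `W` and `b` need to be stored; prediction is a single matrix-vector product followed by an argmax.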
Directory of Open Access Journals (Sweden)
Damiano Monelli
2010-11-01
Full Text Available We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: (1) for one implementation (STEP-LG), the original model parameterization and estimation is used; (2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitude up to ML = 6.2 are expected to be less productive compared to the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
A varying time-step explicit numerical integration algorithm for solving motion equation
Institute of Scientific and Technical Information of China (English)
ZHOU Zheng-hua; WANG Yu-huan; LIU Quan; YIN Xiao-tao; YANG Cheng
2005-01-01
If a traditional explicit numerical integration algorithm is used to solve the motion equation in the finite element simulation of wave motion, the time step used by the numerical integration is the smallest time step allowed by the stability criterion anywhere in the computational region. However, such an excessively small time step is usually unnecessary for a large portion of the computational region. In this paper, a varying time-step explicit numerical integration algorithm is introduced; its basic idea is to use, in each part of the computational region, the time step permitted by the local stability criterion. Finally, the feasibility of the algorithm and its effect on calculation precision are verified by a numerical test.
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
Full Text Available In this paper we analyze the complexity of the time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis
Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann
2017-04-01
The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and, subsequently, the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for
UTC TIME STEP on the 1st of July 1997
Institute of Scientific and Technical Information of China (English)
1997-01-01
A positive leap second will be introduced in the UTC time scales UTC(JATC) and UTC(CSAO) and in the UTC time signals of the BPL and BPM transmissions at the end of June 1997. The sequence of dates of the UTC second markers will be:
Steps towards an axiomatic pregeometry of space-time
Bergliaffa, Santiago E. Perez; Romero, Gustavo E.; Vucetich, Hector
1998-01-01
We present a deductive theory of space-time which is realistic, objective, and relational. It is realistic because it assumes the existence of physical things endowed with concrete properties. It is objective because it can be formulated without any reference to cognoscent subjects or sensorial fields. Finally, it is relational because it assumes that space-time is not a thing but a complex of relations among things. In this way, the original program of Leibniz is consummated, in the sense that space is ultimately an order of coexistents, and time is an order of successives. In this context, we show that the metric and topological properties of Minkowskian space-time are reduced to relational properties of concrete things. We also sketch how our theory can be extended to encompass a Riemannian space-time.
Garrett, Bruce C.; Swaminathan, P. K.; Murthy, C. S.; Redmon, Michael J.
1987-01-01
A variable time step algorithm has been implemented for solving the stochastic equations of motion for gas-surface collisions. It has been tested for a simple model of electronically inelastic collisions with an insulator surface, in which the phonon manifold acts as a heat bath and electronic states are localized. In addition to reproducing the accurate nuclear dynamics of the surface atoms, numerical calculations have shown the algorithm to yield accurate ensemble averages of physical observables such as electronic transition probabilities and the total energy loss of the gas atom to the surface. This new algorithm offers a gain in efficiency of up to an order of magnitude compared to fixed time step integration.
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime, balancing human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
Willkofer, Florian; Wood, Raul R.; Schmid, Josef; von Trentini, Fabian; Ludwig, Ralf
2016-04-01
The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. It builds on the conjoint analysis of a large ensemble of the CRCM5, driven by 50 members of the CanESM2, and the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change on the dynamics of extreme events. A critical point in the entire project is the preparation of a meteorological reference dataset with the required temporal (1-6h) and spatial (500m) resolution to be able to better evaluate hydrological extreme events in mesoscale river basins. For Bavaria a first reference data set (daily, 1km) used for bias-correction of RCM data was created by combining raster based data (E-OBS [1], HYRAS [2], MARS [3]) and interpolated station data using the meteorological interpolation schemes of the hydrological model WaSiM [4]. Apart from the coarse temporal and spatial resolution, this mosaic of different data sources is considered rather inconsistent and hence, not applicable for modeling of hydrological extreme events. Thus, the objective is to create a dataset with hourly data of temperature, precipitation, radiation, relative humidity and wind speed, which is then used for bias-correction of the RCM data being used as driver for hydrological modeling in the river basins. Therefore, daily data is disaggregated to hourly time steps using the 'Method of fragments' approach [5], based on available training stations. The disaggregation chooses fragments of daily values from observed hourly datasets, based on similarities in magnitude and behavior of previous and subsequent events. The choice of a certain reference station (hourly data, provision of fragments) for disaggregating daily station data (application
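The 'method of fragments' disaggregation step can be sketched as follows, with made-up daily rainfall and a 1-nearest-neighbour match on the daily total alone (the cited approach also conditions on the magnitude and behaviour of previous and subsequent events, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical training data: hourly precipitation for 30 observed days
train_hourly = rng.gamma(0.3, 2.0, (30, 24))
train_daily = train_hourly.sum(axis=1)
# Fragments: each day's hourly values normalized by that day's total
fragments = train_hourly / train_daily[:, None]

def disaggregate(daily_value):
    """Pick the training day with the most similar daily total and
    rescale its hourly fragment so it sums to the target daily value."""
    k = np.argmin(np.abs(train_daily - daily_value))
    return daily_value * fragments[k]

hourly = disaggregate(12.5)   # 24 hourly values summing to 12.5
```

By construction the disaggregated series conserves the daily total exactly, which is the key property for driving an hourly rainfall-runoff model from daily weather-generator output.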
Stepped spillway optimization through numerical and physical modeling
Directory of Open Access Journals (Sweden)
Hamed Sarkardeh, Morteza Marosi, Raza Roshan
2015-01-01
Full Text Available The spillway is among the most important structures of a dam. It is important for the spillway to be designed properly so that it passes flood flows safely while dissipating more energy. The zone where the ogee spillway crest and the stepped chute profile join is important from a design point of view. In the present study, a physical model as well as a numerical model was employed in a case study of a stepped spillway to modify the transitional zone and improve the flow pattern over the spillway. Many alternatives were examined and optimized. Finally, the performance of the selected alternative was checked for different flow conditions, air entrainment, and energy dissipation. In the numerical model, the RNG model was selected to simulate turbulence and the VOF model to capture the free surface. Results of the numerical and physical models were compared, and good agreement was found in flow conditions and energy dissipation.
Efficient vector hysteresis modeling using rotationally coupled step functions
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A., E-mail: adlyamr@gmail.com; Abd-El-Hafiz, S.K., E-mail: sabdelhafiz@gmail.com
2012-05-01
Vector hysteresis models are usually used as sub-modules of field computation software tools. When dealing with a massive field computation problem, computational efficiency and practicality of such models become crucial. In this paper, generalization of a recently proposed computationally efficient vector hysteresis model based upon interacting step functions is presented. More specifically, the model is generalized to cover vector hysteresis modeling of both isotropic and anisotropic magnetic media. Model configuration details as well as experimental testing and simulation results are given in the paper.
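The idea of building hysteresis from step functions can be illustrated with a scalar ensemble of relay (step) operators; the thresholds and weights below are illustrative only, and the paper's rotational coupling and vector generalization are omitted:

```python
import numpy as np

# Scalar sketch: hysteresis as the mean output of 10 relay operators,
# each a step function that switches up at +threshold and down at
# -threshold. Thresholds are illustrative, not an identified model.
up = np.linspace(0.1, 1.0, 10)      # switch-up thresholds
down = -up                          # symmetric switch-down thresholds
state = -np.ones(10)                # every relay starts in the "down" state

def magnetization(h):
    """Apply field h, update each relay, return the averaged output."""
    state[h >= up] = 1.0
    state[h <= down] = -1.0
    return state.mean()

# Drive the field 0 -> +1.2 -> 0 -> -1.2 -> 0 and record the output;
# the nonzero values at h = 0 are the remanence, i.e. history dependence.
path = [magnetization(h) for h in (0.0, 1.2, 0.0, -1.2, 0.0)]
```

Because each relay is a simple step function, evaluating the model is a pair of vectorized comparisons, which is what makes this family of models attractive inside field-computation loops.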
Modified pendulum model for mean step length estimation.
González, Rafael C; Alvarez, Diego; López, Antonio M; Alvarez, Juan C
2007-01-01
Step length estimation is an important issue in areas such as gait analysis, sport training or pedestrian localization. It has been shown that the mean step length can be computed by means of a triaxial accelerometer placed near the center of gravity of the human body. Estimations based on the inverted pendulum model are prone to underestimate the step length, and must be corrected by calibration. In this paper we present a modified pendulum model in which all the parameters correspond to anthropometric data of the individual. The method has been tested with a set of volunteers, both males and females. Experimental results show that this method provides an unbiased estimation of the actual displacement with a standard deviation lower than 2.1%.
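For reference, the uncorrected inverted-pendulum estimate that the paper improves upon can be written directly; the leg length and vertical center-of-mass excursion below are made-up values (in practice the excursion is obtained by double-integrating the vertical acceleration):

```python
import math

def step_length_pendulum(leg_length, com_drop):
    """Inverted-pendulum geometry: the centre of mass moves on an arc of
    radius leg_length (l), so a vertical excursion com_drop (h) implies a
    step length of 2*sqrt(2*l*h - h**2). This is the classic estimate
    that tends to underestimate the true step length without calibration."""
    return 2.0 * math.sqrt(2.0 * leg_length * com_drop - com_drop ** 2)

sl = step_length_pendulum(leg_length=0.9, com_drop=0.05)  # metres
```

The modified model in the paper replaces the single calibration constant with anthropometric parameters of the individual, removing the bias of this bare formula.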
Comparison of time stepping schemes on the cable equation
Directory of Open Access Journals (Sweden)
Chuan Li
2010-09-01
Full Text Available Electrical propagation in excitable tissue, such as nerve fibers and heart muscle, is described by a parabolic PDE for the transmembrane voltage $V(x,t)$, known as the cable equation, $$ \frac{1}{r_a}\frac{\partial^2 V}{\partial x^2} = C_m\frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V,t) + I_{\mathrm{stim}}(t), $$ where $r_a$ and $C_m$ are the axial resistance and membrane capacitance. The source term $I_{\mathrm{ion}}$ represents the total ionic current across the membrane, governed by the Hodgkin-Huxley or other more complicated ionic models, and $I_{\mathrm{stim}}(t)$ is an applied stimulus current.
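A minimal comparison of time stepping schemes on a passive (linear) version of this equation shows why the choice matters: forward Euler blows up beyond its stability limit while backward Euler stays bounded. All parameter values below are illustrative, not from the article:

```python
import numpy as np

# Passive (linear) cable as a stand-in for the full ionic models:
# C_m dV/dt = (1/r_a) d2V/dx2 - V/r_m, on N nodes with sealed ends.
N, dx, ra, cm, rm = 50, 0.01, 1.0, 1.0, 1.0
main = -2.0 * np.ones(N); main[[0, -1]] = -1.0        # Neumann boundaries
A = (np.diag(main) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / (ra * dx ** 2)
A -= np.eye(N) / rm                                   # leak current term
V0 = np.exp(-((np.arange(N) - N // 2) ** 2) / 20.0)   # initial voltage bump

def run(dt, implicit, steps=20):
    """March the semi-discrete system with forward or backward Euler."""
    V = V0.copy()
    B = np.linalg.inv(np.eye(N) - dt / cm * A) if implicit else None
    for _ in range(steps):
        V = B @ V if implicit else V + dt / cm * (A @ V)
    return np.abs(V).max()

dt_big = 1e-3   # far above the forward-Euler limit, roughly ra*cm*dx**2/2
explicit_amp = run(dt_big, implicit=False)   # grows without bound
implicit_amp = run(dt_big, implicit=True)    # decays, unconditionally stable
```

The trade-off is the linear solve (here a precomputed inverse) that every implicit step requires, which is exactly what comparisons of time stepping schemes on the cable equation quantify.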
The Creative Music Strategy: A Seven-Step Instructional Model
Robinson, Nathalie G.; Bell, Cindy L.; Pogonowski, Lenore
2011-01-01
The creative music strategy is a dynamic and flexible seven-step model for guiding general music students through the music concepts of improvisation and composition, followed by critical reflection. These are musical behaviors that cultivate the development of our students' deeper conceptual understandings and music independence by helping them…
Step-indexed Kripke models over recursive worlds
DEFF Research Database (Denmark)
Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan
2011-01-01
worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models...
Problem Resolution through Electronic Mail: A Five-Step Model.
Grandgenett, Neal; Grandgenett, Don
2001-01-01
Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…
Stutter-Step Models of Performance in School
Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.
2013-01-01
To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…
Institute of Scientific and Technical Information of China (English)
Xiaoyan Li; Wei Yang
2005-01-01
A multiple time step algorithm, called the reversible reference system propagator algorithm, is introduced for long time molecular dynamics simulation. In contrast to conventional algorithms, the multiple time step method has better convergence, stability, and efficiency. The method is validated by simulating free relaxation and the hypervelocity impact of nano-clusters. The time efficiency of the multiple time step method enables us to investigate the long time interaction between lattice dislocations and low-angle grain boundaries.
Electrifying emissions: reducing GHG emissions one step at a time
Energy Technology Data Exchange (ETDEWEB)
Macedo, R.
2010-05-15
Four waste heat generation systems were installed at compressor stations in Saskatchewan as part of SaskPower's effort to reduce greenhouse gas emissions and produce electricity in an environmentally responsible manner. NRGreen Power Limited Partnership, a sister company of Alliance Pipeline Ltd., was awarded the contract to construct the 4 waste heat power generation units at compressor stations along Alliance's natural gas pipeline system. Using technology developed and manufactured by Ormat Technologies, Inc., the waste heat units recover the exhaust heat from natural gas turbines, which compress the gas to transport it through the pipeline, and convert it into electricity. Each unit produces 5 megawatts of power, enough energy to power approximately 5,000 homes. The units are located in Kerrobert, Estlin, Loreburn and Alameda, Saskatchewan. The 3 main components of the units are the heat exchanger, a thermal oil loop and an energy converter. A unique feature of the waste heat power generation system is that it is entirely self-contained. The compressors on the Alliance system operate about 99 per cent of the time with a high degree of reliability, which is key for an electricity provider. This same technology could be applied to other jurisdictions where the Alliance pipeline crosses. NRGreen is also proposing to build waste heat power generation units at 3 of its existing compressor stations in Alberta. NRGreen has regulatory approval to install the units at Irma, Morinville and Windfall. Although opportunities may arise in the United States, challenges remain in getting the technology recognized as environmentally preferred or equivalent to other renewable sources. 1 ref., 2 figs.
Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
2004-01-01
textabstractThis book considers periodic time series models for seasonal data, characterized by parameters that differ across the seasons, and focuses on their usefulness for out-of-sample forecasting. Providing an up-to-date survey of the recent developments in periodic time series, the book
Performance of a Voltage Step-Up/Step-Down Transformerless DC/DC Converter: Analytical Model
Suskis, P.; Rankis, I.
2012-01-01
The authors present an analytical model for a voltage step-up/step-down DC/DC converter without transformers. The proposed topology is a combination of the classic buck and boost converters in a single circuit, but with differing operational principles. The converter is developed for a wind power autonomous supply system equipped with a hydrogen electrolytic tank and a fuel cell for energy stabilization. The main power source of the hydrogen-based autonomous supply system is a synchronous generator operating on permanent magnets and equipped with a diode bridge. The input voltage of the converter in this case varies in the range 0-700 V, while its output DC voltage must be 540 V according to the demands of the other parts of the system. To maintain the rated voltage, a special electrical load regulation is introduced. The calculations of the converter, the generator (equipped with a diode bridge) as an element of the power supply system, and the load, represented by a resistance, are verified with PSIM software.
Directory of Open Access Journals (Sweden)
Po Hu
2016-01-01
Full Text Available Implementing real-time machining process control on the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithreading, and shared-memory technologies together. The cutting force in a specific direction of a machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, the STEP-NC data model, and the implementation methods for real-time machining process control. The results of the experiments prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, keeping the actual cutting force around the ideal value whether the axial cutting depth changes suddenly or continuously.
Critical time step for a bilinear laminated composite Mindlin shell element.
Energy Technology Data Exchange (ETDEWEB)
Hammerand, Daniel Carl
2004-06-01
The critical time step needed for explicit time integration of laminated shell finite element models is presented. Each layer is restricted to be orthotropic when viewed from a properly oriented material coordinate system. Mindlin shell theory is used in determining the laminated response that includes the effects of transverse shear. The effects of the membrane-bending coupling matrix from the laminate material model are included. Such a coupling matrix arises even in the case of non-symmetric lay-ups of differing isotropic layers. Single point integration is assumed to be used in determining a uniform strain response from the element. Using a technique based upon one from the literature, reduced eigenvalue problems are established to determine the remaining non-zero frequencies. It is shown that the eigenvalue problem arising from the inplane normal and shear stresses is decoupled from that arising from the transverse shear stresses. A verification example is presented where the exact and approximate results are compared.
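The underlying stability computation can be illustrated on a much simpler structure, a 1-D bar of two-node elements with single-point (lumped) mass: the critical step for central differences is dt_crit = 2/omega_max, and the per-element estimate h/c bounds it from below. Material values below are arbitrary:

```python
import numpy as np

# Critical explicit time step for a clamped 1-D bar of n uniform
# two-node elements (a toy stand-in for the laminated shell analysis).
n, L, E, rho, A = 20, 1.0, 210e9, 7800.0, 1e-4
h = L / n
c = np.sqrt(E / rho)                       # bar wave speed
k = E * A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.zeros((n + 1, n + 1))
M = np.zeros(n + 1)                        # lumped (diagonal) mass vector
for e in range(n):
    K[e:e + 2, e:e + 2] += k               # assemble element stiffness
    M[e:e + 2] += 0.5 * rho * A * h        # half the element mass per node
K, M = K[1:, 1:], M[1:]                    # clamp the left end
# Symmetrized generalized eigenproblem: omega^2 = eig(M^-1/2 K M^-1/2)
s = 1.0 / np.sqrt(M)
omega_max = np.sqrt(np.linalg.eigvalsh(s[:, None] * K * s[None, :]).max())
dt_crit = 2.0 / omega_max                  # central-difference stability limit
```

By the element eigenvalue bound, the assembled omega_max never exceeds the largest element frequency 2c/h, so dt_crit is at least h/c; the reduced eigenvalue problems in the paper play the same role for the laminated shell element.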
First attempt to overcome the disaster of Dirac sea in imaginary time step method
Institute of Scientific and Technical Information of China (English)
ZHANG Ying; LIANG Hao-Zhao; MENG Jie
2009-01-01
Efforts have been made to solve the Dirac equation with axially deformed scalar and vector Woods-Saxon potentials in coordinate space with the imaginary time step method. The single-particle energies thus obtained are consistent with those calculated with the basis expansion method, which demonstrates the feasibility of the imaginary time step method for relativistic static problems.
Step Flow Model of Radial Growth and Shape Evolution of Semiconductor Nanowires
Filimonov, S. N.; Hervieu, Yu. Yu.
2016-12-01
A model of radial growth of vertically aligned nanowires (NWs) via the formation and propagation of monoatomic steps at the nanowire sidewalls is developed. The model describes self-consistently the step dynamics and the axial growth of the NW. It is shown that the formation of NWs with an abrupt change of wire diameter and a non-tapered section at the top might be explained by the bunching of sidewall steps due to the presence of a strong sink for adatoms at the NW top. The Ehrlich-Schwoebel barrier for the attachment of adatoms to the descending step favors step bunching at the beginning of the radial growth and promotes the decay of the bunch at a later stage of NW growth.
A proposed health model: a step before model confirmation.
Gauff, J F
1992-01-01
Health marketers have devoted extensive conceptual and empirical effort toward explaining and predicting individuals' health-related decisions. This paper proposes a health behavior model by combining the health belief model and the theory of planned behavior model. Recent modifications of the Fishbein and Ajzen (1975) model are discussed and an extension is introduced to better explain goal pursuit. These revisions (Bagozzi and Warshaw 1990) are incorporated in the proposed model.
Semiletov, V. A.; Karabasov, S. A.
2013-11-01
Explicit time stepping renders many high-resolution computational schemes less efficient when dealing with the non-uniform grids typical of many aeroacoustic applications. Asynchronous time stepping, i.e., updating the solution in cells of different sizes according to their local rates, is known to be a promising way to improve the efficiency of explicit time-stepping methods without compromising accuracy. In the present paper, a new asynchronous time-stepping algorithm is developed for the Compact Accurately Boundary-Adjusting high-REsolution Technique (CABARET) Euler method. This significantly speeds up the original single-step CABARET method on non-uniform grids and improves its accuracy at the same time. Numerical examples are provided, and issues associated with the method's performance on various grid resolutions are discussed.
A time step criterion for the stable numerical simulation of hydraulic fracturing
Juan-Lien Ramirez, Alina; Löhnert, Stefan; Neuweiler, Insa
2017-04-01
The process of propagating or widening cracks in rock formations by means of fluid flow, known as hydraulic fracturing, has been gaining attention in the last couple of decades. There is growing interest in its numerical simulation to make predictions. Due to the complexity of the processes taking place, e.g. solid deformation, fluid flow in an open channel, fluid flow in a porous medium, and crack propagation, this is a challenging task. Hydraulic fracturing has been numerically simulated for some years now [1], and new methods have been developed in recent years to take more of its processes into account (increasing accuracy) while modeling in an efficient way (lowering computational effort). An example is the use of the Extended Finite Element Method (XFEM), whose application originated within the framework of solid mechanics, but which is now seen as an effective method for the simulation of discontinuities with no need for re-meshing [2]. While much focus has been put on the correct coupling of the processes mentioned above, less attention has been paid to the stability of the model. When using a quasi-static approach for the simulation of hydraulic fracturing, choosing an adequate time step is not trivial. This is particularly true if the equations are solved in a staggered way. The difficulty lies in the inconsistency between the static behavior of the solid and the dynamic behavior of the fluid. It has been shown that overly small time steps may lead to instabilities early in the simulation time [3]. While the solid reaches a stationary state instantly, the fluid is not able to achieve equilibrium with its new surroundings immediately. This is why a time step criterion has been developed to quantify the instability of the model with respect to the time step. The presented results were created with a 2D poroelastic model, using the XFEM for both the solid and the fluid phases. An embedded crack propagates following the energy release rate criterion when the fluid pressure
Algorithms and Data Structures for Multi-Adaptive Time-Stepping
Jansson, Johan
2012-01-01
Multi-adaptive Galerkin methods are extensions of the standard continuous and discontinuous Galerkin methods for the numerical solution of initial value problems for ordinary or partial differential equations. In particular, the multi-adaptive methods allow individual and adaptive time steps to be used for different components or in different regions of space. We present algorithms for efficient multi-adaptive time-stepping, including the recursive construction of time slabs and adaptive time step selection. We also present data structures for efficient storage and interpolation of the multi-adaptive solution. The efficiency of the proposed algorithms and data structures is demonstrated for a series of benchmark problems.
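The core idea, letting each solution component advance with its own adaptive step and synchronizing all components at "time slab" boundaries, can be sketched in a toy forward-Euler form. The paper's methods are Galerkin-based with recursive slab construction; the step-selection rule below (step inversely proportional to each component's rate) is a placeholder assumption for illustration only.

```python
def multi_adaptive_euler(f, y0, t_end, rates):
    """Toy multi-adaptive forward Euler: each component i advances with
    its own step dt_i, and all components synchronize at the end of each
    time slab, whose length equals the largest individual step."""
    steps = [0.1 / r for r in rates]   # hypothetical per-component step rule
    slab = max(steps)                  # slab length: largest component step
    y = list(y0)
    t = 0.0
    while t < t_end - 1e-12:
        for i, dt in enumerate(steps):
            ti = t
            while ti < t + slab - 1e-12:
                h = min(dt, t + slab - ti)   # do not overshoot the slab end
                y[i] += h * f(i, y)
                ti += h
        t += slab
    return y
```

Here a stiff component (rate 100) takes 100 substeps per slab while a slow one takes a single step; avoiding the small global step for every component is the source of the method's efficiency.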
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, involve complex processes in which fast phases are followed by slow phases. Their time-accurate computation therefore requires short time steps initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time-stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
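A feedback step controller of the general kind described here can be sketched with the standard proportional-integral (PI) law from the ODE literature: the step grows when local error estimates stay below tolerance and shrinks when they exceed it. The gains, limiter, and bounds below are illustrative assumptions, not values from the paper.

```python
def pi_step_controller(err, tol, dt, err_prev,
                       kp=0.075, ki=0.175, dt_min=1e-8, dt_max=1.0):
    """Adjust the time step from the current and previous local error
    estimates via PI feedback control of the step size. Gains and
    limits here are illustrative defaults."""
    fac = (tol / err) ** ki * (err_prev / err) ** kp  # PI control law
    fac = min(2.0, max(0.2, fac))                     # limit step changes
    return min(dt_max, max(dt_min, dt * fac))
```

A step with err above tol is shrunk (and would typically be rejected and retried); a step comfortably below tol is grown, which is how the fast-slow-fast phases of a transient are tracked automatically.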
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
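The bookkeeping behind hierarchical (block) time steps can be illustrated with a small scheduling sketch. The power-of-two level structure below is a common convention in N-body codes and is assumed here for illustration; it is not a detail taken from GOTHIC itself.

```python
def block_step_schedule(levels, n_slabs):
    """List which hierarchy levels are advanced at each fine substep.
    Level L uses dt = dt_max / 2**L; all levels synchronize every dt_max
    (a common power-of-two block-step convention, assumed here)."""
    fine = 2 ** (levels - 1)               # fine substeps per slab
    schedule = []
    for k in range(1, n_slabs * fine + 1):
        active = [L for L in range(levels) if k % (fine // 2 ** L) == 0]
        schedule.append(active)
    return schedule
```

With three levels, the deepest (smallest-step) level is advanced at every substep while the coarsest moves only at slab boundaries; in an N-body code only particles on the "due" levels have their forces recomputed, which is where the reported 3-5x speedup over a shared time step comes from.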
Institute of Scientific and Technical Information of China (English)
傅为农; 江建中
2000-01-01
The geometrical feature of skewed rotor slots in induction motors makes the 2-dimensional (2-D) finite element method (FEM) not directly applicable. Based on a multi-slice model, this paper describes a time-stepping 2-D eddy-current FEM for studying the steady-state operation and the starting process of induction machines with skewed rotor slots. The fields of the multiple slices are solved in parallel, so the effects of skewed slots and eddy currents can be taken into account directly. The basic formulas for the multi-slice model are derived. A special technique to reduce computation time in solving the coupled system equations is also described. The results obtained with the developed program correlate very well with test data.
Convergence for Imaginary Time Step evolution in the Fermi and Dirac seas
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2010-01-01
The convergence of the Imaginary Time Step (ITS) evolution with respect to the time step is investigated by performing the ITS evolution for the Schrödinger-like equation and the charge-conjugate Schrödinger-like equation deduced from the Dirac equation for the single proton levels of ¹²C in both the Fermi and Dirac seas. For guaranteed convergence of the ITS evolution to the "exact" results, the time step should be smaller than a "critical" time step Δtc for a given single-particle level. The "critical" time step Δtc is more sensitive to the quantum number |κ| than to the energy of the single-particle level. For single-particle levels with the same κ, the "critical" time steps are of the same order. For single-particle levels with similar energy, a relatively small (large) "critical" time step is needed for larger (smaller) |κ|. These conclusions can be used in future self-consistent calculations to optimize the evolution procedure.
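The critical-time-step behaviour described above can be reproduced with a toy matrix "Hamiltonian" (an illustrative analogue, not the paper's Dirac solver). ITS evolution with renormalization is essentially power iteration on (1 - Δt·H), so it converges to the lowest level only while Δt stays below a critical value set by the spectral range of H.

```python
import numpy as np

def its_ground_state(H, dt, n_iter=5000):
    """Imaginary time step evolution psi <- (1 - dt*H) psi with
    renormalization: power iteration on (1 - dt*H). It converges to the
    lowest level of H only while |1 - dt*E| is largest at the ground
    state, i.e. only below a critical time step."""
    psi = np.ones(H.shape[0])
    for _ in range(n_iter):
        psi = psi - dt * (H @ psi)
        psi /= np.linalg.norm(psi)
    return psi @ H @ psi                   # Rayleigh quotient: the energy
```

For H = diag(0, 1, 10) the iteration converges to the lowest level for dt < 2/E_max = 0.2 and to the highest level above that, a toy analogue of the "critical" time step Δtc.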
Model-based risk analysis of coupled process steps.
Westerberg, Karin; Broberg-Hansen, Ernst; Sejergaard, Lars; Nilsson, Bernt
2013-09-01
A section of a biopharmaceutical manufacturing process involving the enzymatic coupling of a polymer to a therapeutic protein was characterized with regard to process parameter sensitivity and design space. To minimize the formation of unwanted by-products in the enzymatic reaction, the substrate was added in small amounts and unreacted protein was separated using size-exclusion chromatography (SEC) and recycled to the reactor. The quality of the final recovered product was thus a result of the conditions in both the reactor and the SEC, and a design space had to be established for both processes together. This was achieved by developing mechanistic models of the reaction and SEC steps, establishing the causal links between process conditions and product quality. Model analysis was used to complement the qualitative risk assessment, and the design space and critical process parameters were identified. The simulation results gave an experimental plan focusing on the "worst-case regions" in terms of product quality and yield. In this way, the experiments could be used to verify both the suggested process and the model results. This work demonstrates the necessary steps of model-assisted process analysis, from model development through experimental verification.
An optimal adaptive time-stepping scheme for solving reaction-diffusion-chemotaxis systems.
Chiu, Chichia; Yu, Jui-Ling
2007-04-01
Reaction-diffusion-chemotaxis systems have proven to be fairly accurate mathematical models for many pattern formation problems in chemistry and biology. These systems are important for computer simulations of patterns, parameter estimations as well as analysis of the biological systems. To solve reaction-diffusion-chemotaxis systems, efficient and reliable numerical algorithms are essential for pattern generations. In this paper, a general reaction-diffusion-chemotaxis system is considered for specific numerical issues of pattern simulations. We propose a fully explicit discretization combined with a variable optimal time step strategy for solving the reaction-diffusion-chemotaxis system. Theorems about stability and convergence of the algorithm are given to show that the algorithm is highly stable and efficient. Numerical experiment results on a model problem are given for comparison with other numerical methods. Simulations on two real biological experiments will also be shown.
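A minimal sketch of a fully explicit scheme with a variable, stability-limited time step is given below. The step-selection rule (diffusion CFL bound combined with an estimated reaction time scale) is a simple assumption for illustration, not the paper's optimal strategy, and the periodic 1-D grid is likewise illustrative.

```python
import numpy as np

def rd_explicit(u, D, f, h, t_end, safety=0.9):
    """Fully explicit update for du/dt = D*u_xx + f(u) on a periodic
    grid, with a variable time step picked each step from the diffusion
    stability bound and the current reaction time scale (a simple
    stand-in for an optimal adaptive step strategy)."""
    t = 0.0
    while t < t_end:
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h ** 2
        r = f(u)
        # diffusion CFL limit and an estimated reaction time scale
        dt = safety * min(h ** 2 / (2 * D),
                          np.abs(u).max() / (np.abs(r).max() + 1e-12))
        dt = min(dt, t_end - t)            # land exactly on t_end
        u = u + dt * (D * lap + r)
        t += dt
    return u
```

Recomputing the step from the current state each iteration keeps the explicit scheme inside its stability region while letting it take large steps whenever the dynamics are slow.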
Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET
Directory of Open Access Journals (Sweden)
B. Ghahraman
2016-02-01
Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management. Actual ET depends on an estimate of a water stress index and the average soil water in the crop root zone, and thus on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or with different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper, different numerical methods are compared for different soil textures and crop sensitivities. Materials and Methods: During a time step with no rainfall or irrigation, the change in soil water content equals the evapotranspiration, ET. In this approach, deep percolation is ignored owing to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. The resulting differential equation may be solved analytically or numerically with different algorithms. We adopted several numerical methods, namely the explicit, implicit, and modified Euler schemes, the midpoint method, and the third-order Heun method, to approximate the differential equation. Three general soil types (sand, silt, and clay) and three crop types (sensitive, moderate, and resistant) in the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which a crop faces water stress, is adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
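The Euler-family schemes compared in studies of this kind can be sketched on a linear depletion model dθ/dt = -kθ, for which the exact solution is θ₀e^(-kt). The linear model and the rate k below are illustrative assumptions, not the study's soil-water function.

```python
def soil_water_steps(theta0, k, dt, n, method):
    """Advance a linear soil-water depletion model d(theta)/dt = -k*theta
    for n steps with one of several Euler-family schemes (the linear
    model and rate k are illustrative assumptions)."""
    theta = theta0
    for _ in range(n):
        if method == "explicit":           # forward Euler
            theta += dt * (-k * theta)
        elif method == "implicit":         # backward Euler, solved exactly
            theta = theta / (1 + k * dt)
        elif method == "midpoint":         # explicit midpoint
            half = theta + 0.5 * dt * (-k * theta)
            theta += dt * (-k * half)
        elif method == "heun3":            # third-order Heun
            k1 = -k * theta
            k2 = -k * (theta + dt * k1 / 3)
            k3 = -k * (theta + 2 * dt * k2 / 3)
            theta += dt * (k1 + 3 * k3) / 4
    return theta
```

On this test problem, integrating over one unit of time with dt = 0.1, the errors follow the formal orders: third-order Heun beats the midpoint method, which beats the first-order Euler schemes, which is the kind of method-versus-time-step interaction the study examines for actual ET.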
Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs
Hadjimichael, Yiannis
2017-09-30
A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
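For context, the canonical explicit SSP scheme in this literature, the three-stage third-order Runge–Kutta method of Shu and Osher, can be written as a convex combination of forward Euler steps. This is the standard explicit example, not one of the thesis's new implicit or perturbed methods.

```python
def ssprk3_step(f, u, dt):
    """One step of the classic three-stage, third-order SSP Runge-Kutta
    method (Shu-Osher form). Each stage is a forward Euler step and the
    result is a convex combination of them, so any bound that forward
    Euler preserves under a step restriction is preserved here too."""
    u1 = u + dt * f(u)                         # Euler stage 1
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))   # convex combination
    return u / 3 + (2 / 3) * (u2 + dt * f(u2))
```

Because the convex-combination coefficients are nonnegative and sum to one, the method inherits forward Euler's monotonicity under the same CFL restriction (SSP coefficient 1), which is exactly the property the perturbed and downwind-biased methods of the thesis seek to enlarge.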
Numerical stability analysis of an acceleration scheme for step size constrained time integrators
Vandekerckhove, Christophe; Roose, Dirk; Lust, Kurt
2007-01-01
Time integration schemes with a fixed time step, much smaller than the dominant slow time scales of the dynamics of the system, arise in the context of stiff ordinary differential equations or in multiscale computations, where a microscopic time-stepper is used to compute macroscopic behaviour. We d
One-step ahead prediction of foF2 using time series forecasting techniques
Directory of Open Access Journals (Sweden)
A. Belehaki
2005-11-01
In this paper the problem of one-step-ahead prediction of the critical frequency (foF2) of the middle-latitude ionosphere, using time series forecasting methods, is considered. The study is based on a sample of about 58000 observations of foF2 with 15-min time resolution, derived from the Athens digisonde ionograms taken with the Digisonde Portable Sounder (DPS4) located at Palaia Penteli (38° N, 23.5° E), for the period from October 2002 to May 2004. First, the embedding dimension of the dynamical system that generates the above sample is estimated using the false nearest neighbor method. This information is then utilized for training the predictors employed in this study: the linear predictor, the neural network predictor, the persistence predictor, and the k-nearest neighbor predictor. The results obtained by the above predictors suggest that, with the mean square error taken as the performance criterion, the first two predictors are significantly better than the latter two. In addition, the results obtained by the linear and the neural network predictors are not significantly different from each other. This may be taken as an indication that a linear model suffices for one-step-ahead prediction of foF2.
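The two simplest predictors mentioned above, persistence and k-nearest neighbours, can be sketched as follows. The embedding dimension m and neighbour count k below are illustrative choices; the paper estimates the embedding dimension with the false-nearest-neighbour method.

```python
import numpy as np

def persistence_predict(series):
    """One-step-ahead persistence: the forecast is the last observation."""
    return series[-1]

def knn_predict(series, m, k):
    """One-step-ahead k-nearest-neighbour forecast: embed the series in
    dimension m, find the k past windows closest to the latest window,
    and average their successors."""
    target = series[-m:]
    windows = np.array([series[i:i + m] for i in range(len(series) - m)])
    successors = np.asarray(series[m:])    # value following each window
    dist = np.linalg.norm(windows - target, axis=1)
    nearest = np.argsort(dist)[:k]
    return successors[nearest].mean()
```

On a strictly periodic signal the k-NN forecast is essentially exact, since past windows repeat; real foF2 data are noisier, which is why the study compares several predictors by mean square error.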
Diagnostic and Prognostic Models for Generator Step-Up Transformers
Energy Technology Data Exchange (ETDEWEB)
Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham
2014-09-01
In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
Single-Step Tunable Group Delay Phaser for Real-Time Spectrum Sniffing
Guo, Tongfeng; Chen, Yifan; Wang, Rui; Caloz, Christophe
2015-01-01
This paper presents a single-step tunable group delay phaser for spectrum sniffing. This device may be seen as a "time filter", where frequencies are suppressed by time separation rather than by spectral attenuation. Compared to its multiple-step counterpart, this phaser features higher processing resolution, greater simplicity, lower loss and better channel equalization, due to the smaller and channel-independent group delay swing. A three-channel example is provided for illustration.
Guermond, J.-L.
2011-01-01
In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.
Modelling a New Product Model on the Basis of an Existing STEP Application Protocol
Directory of Open Access Journals (Sweden)
B.-R. Hoehn
2005-01-01
During the last years a great range of computer-aided tools has been created to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol, or the extension of an existing protocol, would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of AP 214, so no change to the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with the FVA (German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.
Development of real time diagnostics and feedback algorithms for JET in view of the next step
Energy Technology Data Exchange (ETDEWEB)
Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)
2004-07-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
Continuum Limit of a Mesoscopic Model with Elasticity of Step Motion on Vicinal Surfaces
Gao, Yuan; Liu, Jian-Guo; Lu, Jianfeng
2016-12-01
This work considers the rigorous derivation of continuum models of step motion starting from a mesoscopic Burton-Cabrera-Frank-type model, following Xiang's work (SIAM J Appl Math 63(1):241-258, 2002). We prove that as the lattice parameter goes to zero, for a finite time interval, a modified discrete model converges to the strong solution of the limiting PDE with a first-order convergence rate.
A multi-step standard-cell placement algorithm of optimizing timing and congestion behavior
Institute of Scientific and Technical Information of China (English)
侯文婷; 洪先龙; 吴为民; 蔡懿慈
2002-01-01
The timing behavior and congestion behavior are two important goals in performance-driven standard-cell placement. In this paper, we analyze the relationship between timing and congestion behavior and propose a multi-step placement algorithm to reach both goals. First, a timing-driven placement algorithm is used to find a globally optimized solution. In the second step, the algorithm tries to decrease the maximum congestion while not deteriorating the timing behavior. We have implemented our algorithm and tested it with real circuits. The results show that the maximum delay decreases by 30% in the timing-driven placement, and in the second step the maximum congestion decreases by 10% while the timing behavior is unchanged.
Mei, Hua; Li, Shaoyuan
2007-04-01
In order to identify multivariable processes with integrating factors in their transfer function matrices, a simple yet robust decentralized identification method based on closed-loop step tests is proposed. From the frequency response matrix computed from the closed-loop system data and knowledge of the decentralized controller, the structural information of the multivariable integrating process is determined first, and then the continuous parametric model with dead times is approximated, analogously to the parameterization of an open-loop stable process. Computer simulations and an application to a 3 x 3 integrating multiple-tank water level system verify the effectiveness of the proposed method even when the closed-loop system is affected by stochastic noise sources.
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
Gap timing and the spectral timing model.
Hopson, J W
1999-04-01
A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.
Halsey, Lewis G; Watkins, David A R; Duggan, Brendan M
2012-01-01
Stairway climbing provides a ubiquitous and inconspicuous method of burning calories. While typically two strategies are employed for climbing stairs, climbing one stair step per stride or two steps per stride, research to date has not clarified whether there are any differences in energy expenditure between them. Fourteen participants took part in two stair climbing trials in which measures of heart rate were used to estimate energy expenditure during stairway ascent at speeds chosen by the participants. The relationship between rate of oxygen consumption (V̇O2) and heart rate was calibrated for each participant using an inclined treadmill. The trials involved climbing up and down a 14.05 m high stairway, either ascending one step per stride or ascending two stair steps per stride. Single-step climbing used 8.5±0.1 kcal min(-1), whereas double-step climbing used 9.2±0.1 kcal min(-1). These estimates are similar to equivalent measures in all previous studies, which have all directly measured V̇O2. The present study findings indicate that (1) treadmill-calibrated heart rate recordings can be used as a valid alternative to respirometry to ascertain the rate of energy expenditure during stair climbing; (2) two-step climbing invokes a higher rate of energy expenditure; however, one-step climbing is energetically more expensive in total over the entirety of a stairway. Therefore, to expend the maximum number of calories when climbing a set of stairs, the single-step strategy is better.
High order single step time delay compensation algorithm for structural active control
Institute of Scientific and Technical Information of China (English)
王焕定; 耿淑伟; 王伟
2002-01-01
The optimal instantaneous high-order single-step algorithm for active control is first discussed, and then the controlling force vector at time step n+1 of the instantaneous optimal algorithm is derived from the state vector at time step n. From this, an estimating algorithm is developed to solve the problem of active control with time delay compensation. The estimating algorithm, based on this high-order single-step β method (HSM) foundation, is proven by simulation and experimental analysis to be a valid solution to the problem of active control with time delay compensation.
Sina Askari, MS; TeKang Chao; Ray D. de Leon, PhD; Deborah S. Won, PhD
2013-01-01
Results of previous studies raise the question of how timing neuromuscular functional electrical stimulation (FES) to limb movements during stepping might alter neuromuscular control differently than patterned stimulation alone. We have developed a prototype FES system for a rodent model of spinal cord injury (SCI) that times FES to robotic treadmill training (RTT). In this study, one group of rats (n = 6) was trained with our FES+RTT system and received stimulation of the ankle flexor (tibia...
Resuscitator’s perceptions and time for corrective ventilation steps during neonatal resuscitation☆
Sharma, Vinay; Lakshminrusimha, Satyan; Carrion, Vivien; Mathew, Bobby
2016-01-01
Background The 2010 neonatal resuscitation program (NRP) guidelines incorporate ventilation corrective steps (using the mnemonic MRSOPA) into the resuscitation algorithm. The perceptions of neonatal providers, the time taken to perform these maneuvers, and the effectiveness of these additional steps have not been evaluated. Methods Using two simulated clinical scenarios of varying degrees of cardiovascular compromise, perinatal asphyxia with (i) bradycardia (heart rate 40 min−1) and (ii) cardiac arrest, 35 NRP-certified providers were evaluated for their preference for performing these corrective measures, the time taken to perform these steps, and the time to onset of chest compressions. Results The average time taken to perform the ventilation corrective steps (MRSOPA) was 48.9 ± 21.4 s. Providers were less likely to perform corrective steps and proceeded directly to endotracheal intubation in the cardiac arrest scenario as compared to the bradycardia scenario. Cardiac compressions were initiated significantly sooner in the cardiac arrest scenario (89 ± 24 s) than in the severe bradycardia scenario (122 ± 23 s), p < 0.0001. There were no differences in the time taken to initiate chest compressions between physicians and mid-level care providers, or with the level of experience of the provider. Conclusions Effective ventilation of the lungs with corrective steps using a mask is important in most cases of neonatal resuscitation. Neonatal resuscitators prefer early endotracheal intubation and initiation of chest compressions in the presence of asystolic cardiac arrest. Corrective ventilation steps can potentially postpone the initiation of chest compressions and may delay the return of spontaneous circulation in the presence of severe cardiovascular compromise. PMID:25796996
Boosting the accuracy and speed of quantum Monte Carlo: size-consistency and time-step
Zen, Andrea; Gillan, Michael J; Michaelides, Angelos; Alfè, Dario
2016-01-01
Diffusion Monte Carlo (DMC) simulations for fermions are becoming the standard to provide high quality reference data in systems that are too large to be investigated via quantum chemical approaches. DMC with the fixed-node approximation relies on modifications of the Green function to avoid singularities near the nodal surface of the trial wavefunction. We show that these modifications affect the DMC energies in a way that is not size-consistent, resulting in large time-step errors. Building on the modifications of Umrigar et al. and of DePasquale et al., we propose a simple Green function modification that restores size-consistency at large time steps, substantially reducing the time-step errors. The new algorithm also yields remarkable speedups of up to two orders of magnitude in the calculation of molecule-molecule binding energies and crystal cohesive energies, thus extending the horizons of what is possible with DMC.
Exponential time-differencing with embedded Runge–Kutta adaptive step control
Energy Technology Data Exchange (ETDEWEB)
Whalen, P.; Brio, M.; Moloney, J.V.
2015-01-01
We present the first embedded Runge–Kutta exponential time-differencing (RKETD) methods of fourth order with third order embedding and fifth order with third order embedding for non-Rosenbrock type nonlinear systems. A procedure for constructing RKETD methods that accounts for both order conditions and stability is outlined. In our stability analysis, the fast time scale is represented by a full linear operator, in contrast to the particular scalar cases considered before. An effective time-stepping strategy based on reducing both ETD function evaluations and rejected steps is described. Comparisons of performance with adaptive-stepping integrating factor (IF) methods are carried out on a set of canonical partial differential equations: shock fronts of the Burgers equation, interacting KdV solitons, controlled chaos in the Kuramoto–Sivashinsky (KS) equation, and critical collapse of the two-dimensional NLS equation.
Brown, V B; Melchior, L A; Panter, A T; Slaughter, R; Huba, G J
2000-04-01
The Transtheoretical, or Stages of Change, Model has been applied to the investigation of help-seeking related to a number of addictive behaviors. Overall, the model has been shown to be very important in understanding the process of help-seeking. However, substance abuse rarely exists in isolation from other health, mental health, and social problems. The present work extends the original Stages of Change Model by proposing "Steps of Change" as they relate to entry into substance abuse treatment programs for women. Readiness to make life changes in four domains (domestic violence, HIV sexual risk behavior, substance abuse, and mental health) is examined in relation to entry into four substance abuse treatment modalities (12-step, detoxification, outpatient, and residential). The Steps of Change Model hypothesizes that the help-seeking behavior of substance-abusing women may reflect a hierarchy of readiness based on the immediacy, or time urgency, of their treatment issues. For example, women in battering relationships may be ready to make changes to reduce their exposure to violence before admitting readiness to seek substance abuse treatment. The Steps of Change Model was examined in a sample of 451 women contacted through a substance abuse treatment-readiness program in Los Angeles, California. A series of logistic regression analyses predicts entry into each of the four treatment modalities. Results suggest a multidimensional Stages of Change Model that may extend to other populations and to other types of help-seeking behaviors.
Single-step stereolithography of complex anatomical models for optical flow measurements.
de Zélicourt, Diane; Pekkan, Kerem; Kitajima, Hiroumi; Frakes, David; Yoganathan, Ajit P
2005-02-01
Transparent stereolithographic rapid prototyping (RP) technology has already been demonstrated in the literature to be a practical model-construction tool for optical flow measurements such as digital particle image velocimetry (DPIV), laser Doppler velocimetry (LDV), and flow visualization. Here, we employ recently available transparent RP resins and eliminate time-consuming casting and chemical curing steps from the traditional approach. This note details our methodology with relevant material properties and highlights its advantages. Stereolithographic model printing with our procedure is now a direct single-step process, enabling faster geometric replication of complex computational fluid dynamics (CFD) models for exact experimental validation studies. This methodology is specifically applied to the in vitro flow modeling of patient-specific total cavopulmonary connection (TCPC) morphologies. The effect of RP machining grooves on surface quality and hydrodynamic performance, as compared with smooth glass models, is also quantified.
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea
2014-10-31
In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
Simulation step size analysis of a whole-cell computational model of bacteria
Abreu, Raphael; Castro, Maria Clicia S.; Silva, Fabrício Alves B.
2016-12-01
Understanding how complex phenotypes arise from individual molecules and their interactions is a major challenge in biology and, to meet this challenge, computational approaches are increasingly employed. As an example, a recent paper [1] proposed a whole-cell model of Mycoplasma genitalium including all cell components and their interactions. Twenty-eight modules representing several cell functions were modeled independently, and then integrated into a single computational model. One assumption in the whole-cell model of M. genitalium is that all 28 modules can be modeled independently given the 1-second step size used in simulations. This is a major assumption, since it simplifies the modeling of several cell functions and makes the modeling of the system as a whole feasible. In this paper we investigate the dependency of experimental results on that assumption. We have simulated the M. genitalium cell cycle using several simulation time step sizes and compared the results to the ones obtained with the original 1-second simulation time step.
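The kind of sensitivity being probed can be illustrated with a toy experiment: integrating a single first-order decay (a hypothetical stand-in for one module's kinetics, not the actual M. genitalium model) with forward Euler at a coarse step versus a much finer one, and comparing both against the analytic solution.

```python
import math

def euler(f, x0, t_end, dt):
    """Integrate dx/dt = f(x) with forward Euler at a fixed step size dt."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * f(x)
        t += dt
    return x

k = 0.5
f = lambda x: -k * x          # toy first-order decay, standing in for one module
exact = math.exp(-k * 10.0)   # analytic solution at t = 10 for x0 = 1

err_coarse = abs(euler(f, 1.0, 10.0, 1.0) - exact)   # coarse 1 s-like step
err_fine = abs(euler(f, 1.0, 10.0, 0.01) - exact)    # much finer reference step
```

Repeating the full simulation over a range of step sizes and comparing outputs, as the paper does, is the multi-module analogue of comparing `err_coarse` and `err_fine` here.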
Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis
Directory of Open Access Journals (Sweden)
Daniel Grigore ROŞCA
2015-06-01
The goal of this paper was to develop an educationally oriented application based on the Data Mining Apriori Algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact of problem constraints (values of the support and confidence variables, or the size of the transactional database) on the speed of execution. The paper presents a brief overview of the Apriori Algorithm, aspects of the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface, and the process of mining a test transactional database. The impact of some constraints on the speed of the algorithm is also measured experimentally, though without a systematic review of different approaches to increasing execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
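A minimal sketch of the frequent-itemset core of Apriori (not the authors' educational interface; the item names and support threshold below are illustrative):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: return frequent itemsets (as frozensets) with supports."""
    transactions = [set(t) for t in transactions]
    items = {frozenset([i]) for t in transactions for i in t}

    def support(c):
        return sum(1 for t in transactions if c <= t) / len(transactions)

    frequent = {}
    k_sets = {c for c in items if support(c) >= min_support}
    while k_sets:
        frequent.update({c: support(c) for c in k_sets})
        # join step: combine frequent k-itemsets into (k+1)-candidates,
        # then prune candidates with any infrequent subset (Apriori property)
        candidates = {a | b for a in k_sets for b in k_sets
                      if len(a | b) == len(a) + 1}
        k_sets = {c for c in candidates
                  if all(frozenset(s) in frequent
                         for s in combinations(c, len(c) - 1))
                  and support(c) >= min_support}
    return frequent

txns = [{"bread", "milk"}, {"bread", "butter"},
        {"bread", "milk", "butter"}, {"milk"}]
freq = apriori(txns, min_support=0.5)
```

Varying `min_support` or the number of transactions is exactly the kind of constraint-versus-speed experiment the application measures.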
A permeation theory for single-file ion channels: One- and two-step models
Nelson, Peter Hugo
2011-04-01
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no
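The paper's exact A/D parameterization is not reproduced here, but the steady-state flux around a generic two-step (association/dissociation) cycle follows the standard two-state King–Altman result, with each step's rate biased by the fraction of the membrane potential it senses. The unit prefactors and equal electrical distances below are illustrative assumptions only:

```python
import math

def cycle_flux(v, kT=25.0):
    """Net steady-state flux around a generic two-state permeation cycle
    (entry then exit), at membrane potential v in mV; kT in mV-equivalents.
    Prefactors are set to 1 (the association rate would scale with ion
    concentration in a real fit)."""
    d1, d2 = 0.5, 0.5                          # assumed electrical distances
    a1 = math.exp(+d1 * v / (2 * kT))          # entry (association) rate
    b1 = math.exp(-d1 * v / (2 * kT))          # reverse of entry
    a2 = math.exp(+d2 * v / (2 * kT))          # exit (dissociation) rate
    b2 = math.exp(-d2 * v / (2 * kT))          # reverse of exit
    # King-Altman flux for a two-state cycle
    return (a1 * a2 - b1 * b2) / (a1 + a2 + b1 + b2)
```

Near v = 0 this flux is linear in v (Ohmic), while at large |v| it grows faster than linearly, the qualitative behavior the A/D model is fitted to.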
Zheng, F.
2011-01-01
Urban travel times are intrinsically uncertain due to a lot of stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain tr
Infinite Time Cellular Automata: A Real Computation Model
Givors, Fabien; Ollinger, Nicolas
2010-01-01
We define a new transfinite time model of computation, infinite time cellular automata. The model is shown to be as powerful as infinite time Turing machines, both on finite and infinite inputs, and thus inherits many of their properties. We then show how to simulate the canonical real computation model, BSS machines, with infinite time cellular automata in exactly ω steps.
Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach
Directory of Open Access Journals (Sweden)
Baah Aye Kusi
2015-09-01
We investigate bank profitability in Ghana over periods before, during, and after the global financial crisis, using the five-step DuPont model for the first time. We adapt the variables of the five-step DuPont model to explain bank profitability with panel data on twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, fixed- and random-effects models with robust errors are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (assets to equity), and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost returns to equity holders through the above-mentioned variables. We further report that the five-step DuPont model better explains the total variation (94%) in bank profitability in Ghana compared with earlier findings, suggesting that bank-specific variables are key to explaining ROE in Ghanaian banks. We cite no empirical study that has employed the five-step DuPont model, making our study unique and different from earlier studies, as we assert that bank-specific variables are core to explaining bank profitability.
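The decomposition itself can be sketched as follows. This is the standard five-step DuPont form; the numbers are made up and the paper's exact variable definitions may differ:

```python
def five_step_dupont(net_income, pretax_income, ebit, revenue, assets, equity):
    """Standard five-step DuPont decomposition of return on equity (ROE)."""
    tax_burden = net_income / pretax_income     # 1 - effective tax rate
    interest_burden = pretax_income / ebit      # financing cost
    operating_margin = ebit / revenue           # operating activities
    asset_turnover = revenue / assets           # efficiency
    leverage = assets / equity                  # equity multiplier
    return (tax_burden * interest_burden * operating_margin
            * asset_turnover * leverage)

# the five ratios telescope: the product reduces to net_income / equity
roe = five_step_dupont(net_income=80, pretax_income=100, ebit=120,
                       revenue=1000, assets=2000, equity=400)
```

The telescoping is why the decomposition is exact: each intermediate quantity cancels between adjacent factors, so the five ratios attribute ROE to the five channels named in the abstract.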
Double loop control strategy with different time steps based on human characteristics.
Gu, Gwang Min; Lee, Jinoh; Kim, Jung
2012-01-01
This paper proposes a cooperative control strategy in consideration of the force sensitivity of human. The strategy consists of two loops: one is the intention estimation loop whose sampling time can be variable in order to investigate the effect of the sampling time; the other is the position control loop with fixed time step. A high sampling rate is not necessary for the intention estimation loop due to the bandwidth of the mechanoreceptors in humans. In addition, the force sensor implemented in the robot is sensitive to the noise induced from the sensor itself and tremor of the human. Multiple experiments were performed with the experimental protocol using various time steps of the intention estimation loop to find the suitable sampling times in physical human robot interaction. The task involves pull-and-push movement with a two-degree-of-freedom robot, and the norm of the interaction force was obtained for each experiment as the measure of the cooperative control performance.
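The two-loop structure can be sketched as below; the gains, the toy force profile, and the admittance-style intention estimate are illustrative assumptions, not the authors' controller:

```python
def sensed_force(t):
    """Toy interaction profile: the human pulls for 0.5 s, then pushes."""
    return 1.0 if t < 0.5 else -1.0

def run(total_ticks, estimation_period, dt=0.001):
    """Fast loop: position update every tick at fixed dt.
    Slow loop: intention (desired velocity from force) re-estimated only
    every `estimation_period` ticks, mimicking a variable sampling time."""
    x, v_des, kp = 0.0, 0.0, 5.0
    for tick in range(total_ticks):
        if tick % estimation_period == 0:
            v_des = 0.1 * sensed_force(tick * dt)  # admittance-style estimate
        x += kp * v_des * dt                       # fixed-step position loop
    return x

fast_estimation = run(1000, estimation_period=1)    # tracks the force switch
slow_estimation = run(1000, estimation_period=200)  # lags behind the switch
```

With per-tick estimation the pull and push phases cancel, while the coarse estimation loop keeps acting on a stale intention after the force reverses, leaving a residual displacement, a caricature of the sampling-time effect the experiments quantify.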
Simulating diffusion processes in discontinuous media: A numerical scheme with constant time steps
Energy Technology Data Exchange (ETDEWEB)
Lejay, Antoine, E-mail: Antoine.Lejay@iecn.u-nancy.fr [Universite de Lorraine, IECN, UMR 7502, Vandoeuvre-les-Nancy, F-54500 (France); CNRS, IECN, UMR 7502, Vandoeuvre-les-Nancy, F-54500 (France); Inria, Villers-les-Nancy, F-54600 (France); IECN, BP 70238, F-54506 Vandoeuvre-les-Nancy Cedex (France); Pichot, Geraldine, E-mail: Geraldine.Pichot@inria.fr [Inria, Rennes - Bretagne Atlantique, Campus de Beaulieu, 35042 Rennes Cedex (France); INRIA, Campus de Beaulieu, 35042 Rennes Cedex (France)
2012-08-30
In this article, we propose new Monte Carlo techniques for moving a diffusive particle in a discontinuous medium. In this framework, we characterize the stochastic process that governs the positions of the particle. The key tool is the reduction of the process to a skew Brownian motion (SBM). In a zone where the coefficients are locally constant on each side of the discontinuity, the new position of the particle after a constant time step is sampled from the exact distribution of the SBM process at the considered time. To do so, we propose two different but equivalent algorithms: a two-step simulation with a stop at the discontinuity, and a one-step direct simulation of the SBM dynamics. Some benchmark tests illustrate their effectiveness.
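In the special case where the particle starts exactly at the discontinuity, the SBM position after one step has a simple exact form: a half-normal magnitude with a sign drawn with probability (1 + β)/2, where the skewness β encodes the contrast between the diffusion coefficients. A sketch of that special case only (away from the interface, the full SBM transition density is needed, as in the authors' one-step scheme):

```python
import random
import math

def sbm_step_at_interface(beta, dt, sigma=1.0, rng=random):
    """Sample the SBM position after time dt, started AT the discontinuity:
    |N(0, sigma^2 dt)| with sign +1 with probability (1 + beta)/2."""
    magnitude = abs(rng.gauss(0.0, sigma * math.sqrt(dt)))
    sign = 1.0 if rng.random() < (1.0 + beta) / 2.0 else -1.0
    return sign * magnitude

random.seed(0)
samples = [sbm_step_at_interface(0.5, 0.01) for _ in range(20000)]
# the fraction of positive excursions should approach (1 + 0.5)/2 = 0.75
frac_positive = sum(x > 0 for x in samples) / len(samples)
```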
Directory of Open Access Journals (Sweden)
Sina Askari, MS
2013-08-01
Results of previous studies raise the question of how timing neuromuscular functional electrical stimulation (FES) to limb movements during stepping might alter neuromuscular control differently than patterned stimulation alone. We have developed a prototype FES system for a rodent model of spinal cord injury (SCI) that times FES to robotic treadmill training (RTT). In this study, one group of rats (n = 6) was trained with our FES+RTT system and received stimulation of the ankle flexor (tibialis anterior [TA]) muscle timed according to robot-controlled hind-limb position (FES+RTT group); a second group (n = 5) received a similarly patterned stimulation, randomly timed with respect to the rats' hind-limb movements, while they were in their cages (randomly timed stimulation [RS] group). After 4 wk of training, we tested treadmill stepping ability and compared kinematic measures of hind-limb movement and electromyography (EMG) activity in the TA. The FES+RTT group stepped faster and exhibited TA EMG profiles that better matched the applied stimulation profile during training than the RS group. The shape of the EMG profile was assessed by "gamma," a measure that quantified the concentration of EMG activity during the early swing phase of the gait cycle. This gamma measure was 112% higher for the FES+RTT group than for the RS group. The FES+RTT group exhibited burst-to-step latencies that were 41% shorter and correspondingly exhibited a greater tendency to perform ankle flexion movements during stepping than the RS group, as measured by the percentage of time the hind limb was either dragging or in withdrawal. The results from this study support the hypothesis that locomotor training consisting of FES timed to hind-limb movement improves the activation of hind-limb muscle more so than RS alone. Our rodent FES+RTT system can serve as a tool to help further develop this combined therapy to target appropriate neurophysiological changes for locomotor control.
Introduction to Time Series Modeling
Kitagawa, Genshiro
2010-01-01
In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f
Curriculum Type and Sophomore Students' Preparation Time for the USMLE Step 1 Examination.
Richards, Boyd F.; Cariaga-Lo, Liza
1994-01-01
Seventeen medical students in a problem-based learning (PBL) curriculum reported that on average they spent twice as much time preparing for step 1 of the U.S. Medical Licensing Examination as did 52 students in the traditional lecture-based curriculum at the same school. Different learning approaches were also employed. (SLD)
Multi time-step wave-front reconstruction for tomographic Adaptive-Optics systems
Ono, Yoshito H; Oya, Shin; Lardiere, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin
2016-01-01
In tomographic adaptive-optics (AO) systems, errors due to tomographic wave-front reconstruction limit the performance and angular size of the scientific field of view (FoV), where AO correction is effective. We propose a multi time-step tomographic wave-front reconstruction method to reduce the tomographic error by using the measurements from both the current and the previous time-steps simultaneously. We further outline the method to feed the reconstructor with both wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arcminutes in diameter by a factor of 1.5–1.8 when compared to the classical tomographic reconstructor, depending on the guide star asterism and with perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation meth...
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
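The effect on the convergence factor can be illustrated with a simplified stand-in: if each round effectively applies the weight matrix q + 1 times (the routine update plus q predicted steps), the disagreement decays with the second-largest eigenvalue modulus raised to the power q + 1. The doubly stochastic 4-agent weight matrix below is illustrative, not the paper's directed-topology protocol:

```python
import numpy as np

def consensus_error(W, x0, rounds, q=0):
    """Run consensus where each round applies x <- W^(q+1) x, a simplified
    stand-in for the multi-step predictive output protocol; returns the
    final distance from the average-consensus value."""
    M = np.linalg.matrix_power(W, q + 1)
    x = x0.copy()
    for _ in range(rounds):
        x = M @ x
    return float(np.max(np.abs(x - x0.mean())))

# doubly stochastic weights on a 4-cycle (illustrative topology)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x0 = np.array([1.0, 0.0, 0.0, 0.0])

e_routine = consensus_error(W, x0, rounds=8)          # decay ~ rho^rounds
e_predictive = consensus_error(W, x0, rounds=8, q=1)  # decay ~ rho^(2*rounds)
```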
An implicit time-stepping scheme for rigid body dynamics with Coulomb friction
Energy Technology Data Exchange (ETDEWEB)
STEWART,DAVID; TRINKLE,JEFFREY C.
2000-02-15
In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
Energy Technology Data Exchange (ETDEWEB)
Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-05-01
A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
Toggweiler, Matthias; Arbenz, Peter; Yang, Jianjun J
2012-01-01
We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework, and is compared to the existing code. The idea is to adjust the frequency of costly self field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state instead of calculating a local error estimate like a conventional adaptive procedure. In this paper we build on first observations made in recent work. A more profound argument is given on how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a...
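The idea of tying the step size to the current state can be sketched for the Kepler analogy mentioned in the abstract; the dt ∝ r^(3/2) rule and the leapfrog integrator below are illustrative choices, not the OPAL implementation:

```python
import math

def choose_dt(pos, eps=1e-3):
    """State-dependent step size: smaller where the 1/r^2 force is stronger,
    dt proportional to r^(3/2) (the Kepler close-encounter analogy)."""
    return eps * math.hypot(*pos) ** 1.5

def acc(p):
    """Acceleration in an attractive inverse-square central force (GM = 1)."""
    r = math.hypot(*p)
    return (-p[0] / r**3, -p[1] / r**3)

def leapfrog_step(pos, vel, dt):
    """One kick-drift-kick step."""
    ax, ay = acc(pos)
    vx, vy = vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay
    x, y = pos[0] + dt * vx, pos[1] + dt * vy
    ax, ay = acc((x, y))
    return (x, y), (vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)

pos, vel = (1.0, 0.0), (0.0, 1.0)       # circular orbit of the toy problem
for _ in range(200):
    pos, vel = leapfrog_step(pos, vel, choose_dt(pos))
energy = 0.5 * (vel[0]**2 + vel[1]**2) - 1.0 / math.hypot(*pos)
```

The step size is recomputed from the state each step, with no local error estimate, which is the distinction the abstract draws against conventional adaptive procedures.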
Low-resolution density maps from atomic models: how stepping "back" can be a step "forward".
Belnap, D M; Kumar, A; Folk, J T; Smith, T J; Baker, T S
1999-01-01
Atomic-resolution structures have had a tremendous impact on modern biological science. Much useful information also has been gleaned by merging and correlating atomic-resolution structural details with lower-resolution (15–40 Å), three-dimensional (3D) reconstructions computed from images recorded with cryo-transmission electron microscopy (cryoTEM) procedures. One way to merge these structures involves reducing the resolution of an atomic model to a level comparable to a cryoTEM reconstruction. A low-resolution density map can be derived from an atomic-resolution structure by retrieving a set of atomic coordinates, editing the coordinate file, computing structure factors from the model coordinates, and computing the inverse Fourier transform of the structure factors. This method is a useful tool for structural studies, primarily in combination with 3D cryoTEM reconstructions. It has been used to assess the quality of 3D reconstructions, to determine corrections for the phase-contrast transfer function of the transmission electron microscope, to calibrate the dimensions and handedness of 3D reconstructions, to produce difference maps, to model features in macromolecules or macromolecular complexes, and to generate models to initiate model-based determination of particle orientation and origin parameters for 3D reconstruction.
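The pipeline described (coordinates, then structure factors, then inverse Fourier transform) can be sketched as a discrete low-pass filter; the box size, cutoff, and point-atom densities are illustrative simplifications (real use would include atomic form factors and proper sampling):

```python
import numpy as np

def low_resolution_map(coords, box=32, cutoff=0.25):
    """'Step back' from an atomic model: place point densities on a grid,
    Fourier transform (structure factors), zero out spatial frequencies
    above `cutoff` (as a fraction of Nyquist), and invert."""
    grid = np.zeros((box, box, box))
    for x, y, z in coords:
        grid[int(x) % box, int(y) % box, int(z) % box] += 1.0
    F = np.fft.fftn(grid)                  # structure factors of the model
    freq = np.fft.fftfreq(box)             # cycles per voxel, in [-0.5, 0.5)
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    F[np.sqrt(fx**2 + fy**2 + fz**2) > cutoff * 0.5] = 0.0  # low-pass mask
    return np.real(np.fft.ifftn(F))

# three hypothetical atom positions, in grid units
atoms = [(10, 10, 10), (12, 10, 10), (11, 12, 10)]
density = low_resolution_map(atoms)
```

Total density is conserved (the DC structure factor survives the mask) while the sharp atomic peaks are smeared out, which is exactly what makes the map comparable to a cryoTEM reconstruction.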
Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network
Institute of Scientific and Technical Information of China (English)
Ma Qian-Li; Zheng Qi-Lun; Peng Hong; Zhong Tan-Wei; Qin Jiang-Wei
2008-01-01
This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step-prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of recurrent neural networks by a co-evolutionary strategy. The searching space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of recurrent neural networks to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series, and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step-prediction of chaotic time series.
Shock-Timing Experiment Using a Two-Step Radiation Pulse with a Polystyrene Target
Institute of Scientific and Technical Information of China (English)
WANG Feng; PENG Xiao-Shi; JIAO Chun-Ye; LIU Shen-Ye; JIANG Xiao-Hua; DING Yong-Kun
2011-01-01
A shock-timing experiment plays an important role in inertial confinement fusion studies, and the timing of multiple shock waves is crucial to the performance of inertial confinement fusion ignition targets. We present an experimental observation of a shock wave driven by a two-step radiation pulse in a polystyrene target. The experiment is carried out at the Shen Guang 11 Yuan Xing (SGNYX) laser facility in China, and the generation and coalescence of the two shock waves, originating from each of the two radiation steps, are clearly seen with two velocity interferometers. This two-shock-wave coalescence is also simulated by the radiation hydrodynamic code of a multi-1D program. The experimental measurements are compared with the simulations and quite good agreement is found, with relatively small discrepancies in shock timing.
A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling
2015-01-01
In estuaries most of the sediment load is carried in suspension. Sediment dynamics differ depending on sediment supply and on hydrodynamic forcing conditions that vary over space and time. Suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. A robust sediment model is the first step towards a chain of models including contaminants and phytoplankton dy...
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA) and angle-of-arrival (AOA), in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB) signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
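The two stages can be sketched as follows; the block size, threshold rule, and synthetic signal are illustrative, and the simple threshold crossing stands in for the paper's hypothesis-testing fine stage:

```python
import numpy as np

def two_step_toa(signal, block=32, threshold=0.5):
    """Two-step TOA sketch: (1) coarse stage picks the block with maximum
    energy; (2) fine stage finds the first sample in that block exceeding a
    fraction of the block's peak (stand-in for the first-path hypothesis test)."""
    n_blocks = len(signal) // block
    energies = [float(np.sum(signal[i * block:(i + 1) * block] ** 2))
                for i in range(n_blocks)]
    coarse = int(np.argmax(energies))                 # coarse TOA, block index
    segment = np.abs(signal[coarse * block:(coarse + 1) * block])
    fine = int(np.argmax(segment > threshold * segment.max()))
    return coarse * block + fine                      # sample index of first path

rx = np.zeros(1024)
rx[230] = 1.0      # first path
rx[260] = 0.8      # a later multipath component
toa = two_step_toa(rx)
```

The coarse stage touches only block energies (low rate), so the fine search runs over a single block rather than the whole observation window, which is the source of the speedup the abstract describes.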
Directory of Open Access Journals (Sweden)
Jian Ding
2013-11-01
Full Text Available This paper is concerned with the linear unbiased minimum variance estimation problem for discrete-time stochastic linear control systems with one-step random delay and inconsecutive packet dropout. A new model is developed to describe the phenomena of the one-step delay and inconsecutive packet dropout by employing a Bernoulli distributed stochastic variable. Based on the model, a recursive linear unbiased optimal filter in the linear minimum variance sense is designed by the method of completing the square. The solution to the linear filter is given by three equations including a Riccati equation, a Lyapunov equation and a simple difference equation. A sufficient condition for the existence of the steady-state filter is given. A simulation shows the effectiveness of the proposed algorithm.
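One common way to express such a channel with a Bernoulli variable (an illustrative assumption; the paper's precise measurement model may differ) is y_k = xi_k z_k + (1 - xi_k) z_{k-1}: with probability p the current packet arrives, otherwise the one-step-delayed packet is used in its place.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar system x_{k+1} = a x_k + w_k with packets z_k = c x_k + v_k.
a, c, q, r, p_arrive = 0.95, 1.0, 0.01, 0.04, 0.8
n = 200
x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
delayed = np.zeros(n, dtype=bool)
for k in range(1, n):
    x[k] = a * x[k-1] + np.sqrt(q) * rng.standard_normal()
    z[k] = c * x[k] + np.sqrt(r) * rng.standard_normal()
    # Bernoulli variable xi_k = 1: current packet arrives on time;
    # xi_k = 0: it is lost and the one-step-delayed packet is used.
    xi = rng.random() < p_arrive
    y[k] = z[k] if xi else z[k-1]
    delayed[k] = not xi

print(round(delayed[1:].mean(), 2))  # fraction of delayed/substituted packets
```

A filter designed for this channel must account for the fact that each received value may describe either the current or the previous state.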
Block Time Step Storage Scheme for Astrophysical N-body Simulations
Cai, Maxwell Xu; Kouwenhoven, M B N; Assmann, Paulina; Spurzem, Rainer
2015-01-01
Astrophysical research in recent decades has made significant progress thanks to the availability of various $N$-body simulation techniques. With the rapid development of high-performance computing technologies, modern simulations have been able to exploit the computing power of massively parallel clusters with more than $10^5$ GPU cores. While unprecedented accuracy and dynamical scales have been achieved, the enormous amount of data being generated continuously poses great challenges for the subsequent procedures of data analysis and archiving. As an urgent response to these challenges, in this paper we propose an adaptive storage scheme for simulation data, inspired by the block time step integration scheme found in a number of direct $N$-body integrators available nowadays. The proposed scheme, namely the block time step storage scheme, works by minimizing data redundancy through assigning individual output frequencies to the data, as required by the researcher. As demonstrated by benchmarks, the proposed...
Error correction in short time steps during the application of quantum gates
Energy Technology Data Exchange (ETDEWEB)
Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.
2016-04-15
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even in cases where the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Inference Based on Simple Step Statistics for the Location Model.
1981-07-01
function. Let T_{N,k}(θ) = Σ_i a_k(i) V_i(θ). Then T_{N,k} is called the k-step statistic. Noether (1973) studied the 1-step statistic with particular emphasis on...opposed to the sign statistic. These latter two comparisons were first discussed by Noether (1973) in a somewhat different setting. Notice that the...obtained by Noether (1973). If k = 3, we seek the (C + 1)st and (2N - b_1 - b_2 - C)th ordered Walsh averages in D. The algorithm of Section 3 modified to...
Energy Technology Data Exchange (ETDEWEB)
Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)
2012-09-15
ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins under various tube-target distances and exposure times. Five 1-mm-thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) operating at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray value). Furthermore, a linear regression model of aluminum thickness against absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly across setups (p<0.001) and between the materials (p<0.001). The best predictive model was obtained for the 30 cm, 0.2 seconds setup (R2=0.999). Data from the reduced modified stepwedge were comparable with those from the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
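The gray-value-to-absorbency conversion and the aluminum-equivalence regression described above can be sketched as follows. The stepwedge gray values here are invented for illustration (not the study's measurements), and a base-10 logarithm is assumed for the "log" in the formula.

```python
import numpy as np

def gray_to_absorbency(g):
    """A = -log(1 - G/255) from the abstract (base-10 log assumed here)."""
    return -np.log10(1.0 - np.asarray(g, dtype=float) / 255.0)

# Hypothetical gray values for the 12-step aluminum stepwedge.
al_mm = np.arange(1, 13)                  # 1 mm incremental steps
gray = np.array([60, 95, 122, 144, 162, 177, 190, 200, 209, 216, 222, 228])

# Linear regression A = b0 + b1 * thickness, then invert it to express a
# material's radiopacity as an equivalent aluminum thickness.
b1, b0 = np.polyfit(al_mm, gray_to_absorbency(gray), 1)

def equivalent_al_mm(gray_material):
    return (gray_to_absorbency(gray_material) - b0) / b1

print(round(float(equivalent_al_mm(150)), 2))
```

The reduced 3-step variant in the abstract amounts to fitting the same line through only three of the twelve (thickness, absorbency) points.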
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
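The role of the Metropolis test in removing the discretization error of propagation under an inexpensive Hamiltonian can be shown with a minimal surrogate-HMC sketch (an illustration of the principle, not the DHMTS algorithm itself): proposals are generated by leapfrog integration under a cheap harmonic potential, and acceptance is decided by an expensive quartic Hamiltonian, so the sampled distribution is exact despite the crude propagation.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Expensive" reference potential and the force of a cheap harmonic surrogate.
U_exp = lambda x: 0.25 * x**4
F_cheap = lambda x: -x            # force of U_cheap(x) = x^2 / 2

def leapfrog(x, p, dt, n):
    """Reversible, volume-preserving integration with the cheap force only."""
    p += 0.5 * dt * F_cheap(x)
    for i in range(n):
        x += dt * p
        if i < n - 1:
            p += dt * F_cheap(x)
    p += 0.5 * dt * F_cheap(x)
    return x, p

x, samples = 0.0, []
for it in range(22000):
    p = rng.standard_normal()                 # fresh momentum each sweep
    xn, pn = leapfrog(x, p, 0.3, 10)          # propose with cheap dynamics
    dH = (U_exp(xn) + 0.5 * pn**2) - (U_exp(x) + 0.5 * p**2)
    if rng.random() < np.exp(-dH):            # Metropolis test on expensive H
        x = xn
    if it >= 2000:
        samples.append(x)

mean_x2 = np.mean(np.square(samples))
print(round(mean_x2, 2))   # exact value is 2*Gamma(3/4)/Gamma(1/4) ~ 0.676
```

Because leapfrog is reversible and volume preserving regardless of which force it uses, the acceptance test on the expensive Hamiltonian enforces detailed balance with respect to exp(-U_exp), exactly as the Metropolis criterion does in the DHMTS scheme.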
An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling
Directory of Open Access Journals (Sweden)
A. Iqbal
2014-12-01
Full Text Available Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication. Thus, it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered most reliable for modeling anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for the modeling of tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). Very good agreement is found between SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than WGM and provides more details of propagation effects compared with SSFM.
Frequency-Speed Control Model Identification of Ultrasonic Motor Using Step Response
Institute of Scientific and Technical Information of China (English)
Shi Jingzhuo; Zhang Caixia
2015-01-01
Control models of ultrasonic motors are the foundation for high control performance. The frequency of the driving voltage is commonly used as the control variable in the speed control system of an ultrasonic motor, and a speed control model with frequency as its input can significantly improve speed control performance. The step response of the rotating speed is measured, and the transfer function model is then identified through the characteristic point method. Considering the time-varying characteristics of the model parameters, these parameters are fitted with frequency and speed as the independent variables, and a variable model of the ultrasonic motor system is obtained that accounts for the nonlinearity of the ultrasonic motor system. The proposed model can be used in the design and analysis of the speed control system of an ultrasonic motor.
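The characteristic-point idea for a first-order model G(s) = K/(Ts + 1) can be sketched on synthetic data: K is the steady-state value of the unit-step response, and T is the time taken to reach 63.2% of it. The numbers below are hypothetical, not the motor's.

```python
import numpy as np

# Synthetic unit-step response of a hypothetical first-order speed model
# G(s) = K/(T s + 1); the characteristic-point method recovers K and T.
K_true, T_true = 2.0, 0.05
t = np.linspace(0.0, 0.5, 2001)
y = K_true * (1.0 - np.exp(-t / T_true))

K_est = y[-1]                                   # steady-state gain
T_est = t[np.searchsorted(y, 0.632 * K_est)]    # time to 63.2% of final value
print(round(float(K_est), 3), round(float(T_est), 3))  # → 2.0 0.05
```

Repeating this fit at several operating frequencies and speeds yields the parameter tables from which a variable (gain-scheduled) model like the one above can be constructed.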
Real-Time, Single-Step Bioassay Using Nanoplasmonic Resonator With Ultra-High Sensitivity
Zhang, Xiang (Inventor); Ellman, Jonathan A. (Inventor); Chen, Fanqing Frank (Inventor); Su, Kai-Hang (Inventor); Wei, Qi-Huo (Inventor); Sun, Cheng (Inventor)
2014-01-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng
2014-04-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Gonoskov, Ivan; Marklund, Mattias
2016-05-01
We propose and develop a general method for numerically calculating the time evolution of the wave function in a quantum system described by a Hamiltonian of arbitrary dimensionality and with arbitrary interactions. For this, we obtain a general n-order single-step propagator in closed form, which can be used for numerically solving the problem with any prescribed accuracy. We demonstrate the applicability of the proposed approach by considering a quantum problem with a non-separable, time-dependent Hamiltonian: the propagation of an electron in a focused electromagnetic field with a vortex electric field component.
Analytical model of LDMOS with a double step buried oxide layer
Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang
2016-09-01
In this paper, a two-dimensional analytical model is established for the Buried Oxide Double Step Silicon On Insulator (BODS) structure proposed by the authors. Based on the two-dimensional Poisson equation, analytic expressions for the surface electric field and potential distributions of the device are derived. In the BODS structure, the buried oxide layer thickness changes stepwise along the drift region, and positive charge in the drift region can accumulate at the corners of the steps. These accumulated charges function as space charge in the depleted drift region. At the same time, the electric field in the oxide layer also varies with the drift region thickness. These variations, especially the accumulated charges, modulate the surface electric field distribution through the electric field modulation effect, making it more uniform. As a result, the breakdown voltage of the device is improved by 30% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD is utilized; the analytical values are in good agreement with the simulation results. This confirms that the established two-dimensional analytical model for the BODS structure is valid, and it also clearly illustrates the breakdown voltage enhancement provided by the electric field modulation effect. The established analytical models will provide the physical and mathematical basis for further analysis of new power devices with patterned buried oxide layers.
Effect of Velocity and Time-Step on the Continuity of a Discrete Moving Sound Image
Directory of Open Access Journals (Sweden)
Yoshikazu Seki
2014-01-01
Full Text Available As a basic study into 3-D audio display systems, this paper reports the conditions of moving sound image velocity and time-step where a discrete moving sound image is perceived as continuous motion. In this study, the discrete moving sound image was presented through headphones and ran along the ear-axis. The experiments tested the continuity of a discrete moving sound image using various conditions of velocity (0.25, 0.5, 0.75, 1, 2, 3, and 4 m/s) and time-step (0, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12, and 0.14 s). As a result, the following were required in order to present the discrete moving sound image as continuous movement. (1) The 3-D audio display system was required to complete the sound image presentation process, including head tracking and HRTF simulation, in a time shorter than 0.02 s, in order to present sound image movement at all velocities. (2) A processing time longer than 0.1 s was not acceptable. (3) If the 3-D audio display system only presented very slow movement (less than about 0.5 m/s), processing times ranging from 0.04 s to 0.06 s were still acceptable.
Directory of Open Access Journals (Sweden)
Xuesong FENG, Ph.D Candidate
2009-01-01
Full Text Available It is expected that improvement of transport networks can give rise to changes in the spatial distributions of population-related factors and car ownership, which in turn influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis was introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model was integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model was newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model was empirically compared with the 4-STEP model. It was shown that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistically significant differences in values. Moreover, though the simulations of the same set of assumed scenarios by the 4-STEP and SPA-STEP models consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.
Institute of Scientific and Technical Information of China (English)
Dong Wang; Steven J. Ruuth
2008-01-01
Implicit-explicit (IMEX) linear multistep methods are popular techniques for solving partial differential equations (PDEs) with terms of different types. While fixed time-step versions of such schemes have been developed and studied, implicit-explicit schemes also arise naturally in general situations where the temporal smoothness of the solution changes. In this paper we consider easily implementable variable step-size implicit-explicit (VSIMEX) linear multistep methods for time-dependent PDEs. Families of order-p, p-step VSIMEX schemes are constructed and analyzed, where p ranges from 1 to 4. The corresponding schemes are simple to implement and have the property that they reduce to the classical IMEX schemes whenever constant time step-sizes are imposed. The methods are validated on Burgers' equation. These results demonstrate that by varying the time step-size, VSIMEX methods can outperform their fixed time step counterparts while still maintaining good numerical behavior.
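A first-order IMEX step on a toy stiff problem illustrates the splitting (this is the simplest member of the family, not the order-p schemes of the paper): the stiff linear term is treated implicitly, the remaining term explicitly, and the step size is changed mid-run.

```python
import numpy as np

# First-order IMEX Euler with a variable step size on the stiff model
# problem u' = -50 u + cos(t): the linear term is implicit, cos(t) explicit.
lam = -50.0
u = 50.0 / 2501.0          # starts on the smooth particular solution
t, T = 0.0, 2.0
while t < T:
    dt = 0.002 if t < 0.5 else 0.01    # step size changes mid-run
    dt = min(dt, T - t)
    # (u_{n+1} - u_n)/dt = lam*u_{n+1} + cos(t_n)  =>  solve for u_{n+1}
    u = (u + dt * np.cos(t)) / (1.0 - dt * lam)
    t += dt

exact = (50.0 * np.cos(T) + np.sin(T)) / 2501.0
print(abs(u - exact) < 1e-3)   # → True
```

Because the stiff term is implicit, the scheme remains stable when the step size jumps from 0.002 to 0.01, even though 0.01 is well above the explicit-Euler stability limit 2/50 would suggest for rougher problems.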
Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)
Pestana, Reynam C.
2009-01-01
We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
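The reduction from the REM expansion to the conventional second-order scheme described above can be sketched as follows (standard REM notation, with L² = -c²∇²; a summary of the abstract's argument, not its full derivation):

```latex
% Exact time stepping for the wave equation (the REM starting point),
% with L^2 = -c^2 \nabla^2:
u(t+\Delta t) + u(t-\Delta t) = 2\cos(L\,\Delta t)\,u(t).

% REM expands the cosine in Chebyshev polynomials with Bessel-function
% coefficients. Keeping only the first two terms and applying the
% small-time-step approximation of the Bessel functions gives
\cos(L\,\Delta t) \;\approx\; 1 - \tfrac{1}{2}(L\,\Delta t)^2,

% which recovers the conventional second-order finite-difference scheme:
u(t+\Delta t) \;\approx\; 2u(t) - u(t-\Delta t) + c^2\,\Delta t^2\,\nabla^2 u(t).
```

Retaining more Chebyshev terms sharpens the approximation of the cosine operator, which is why the REM can march in larger time steps without numerical dispersion.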
Guo, Peng; Cheng, Wenming; Wang, Yi
2015-11-01
This article considers the parallel machine scheduling problem with step-deteriorating jobs and sequence-dependent setup times. The objective is to minimize the total tardiness by determining the allocation and sequence of jobs on identical parallel machines. In this problem, the processing time of each job is a step function dependent upon its starting time. An individual extended time is penalized when the starting time of a job is later than a specific deterioration date. The possibility of deterioration of a job makes the parallel machine scheduling problem more challenging than ordinary ones. A mixed integer programming model for the optimal solution is derived. Due to its NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve this problem. In order to generate a good initial swarm, a modified Biskup-Hermann-Gupta (BHG) heuristic called MBHG is incorporated into the population initialization. Several discrete operators are proposed in the random walk of Lévy flights and the crossover search. Moreover, a local search procedure based on variable neighbourhood descent is integrated into the algorithm as a hybrid strategy in order to improve the quality of elite solutions. Computational experiments are executed on two sets of randomly generated test instances. The results show that the proposed hybrid algorithm can yield better solutions in comparison with the commercial solver CPLEX® with a one hour time limit, the discrete cuckoo search algorithm and the existing variable neighbourhood search algorithm.
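The step-deterioration rule described above is easy to state in code. A toy sketch with invented data, using brute force instead of the hybrid cuckoo search and a single machine for brevity:

```python
from itertools import permutations

# Step-deteriorating processing time: p_j(t) = a_j if the job starts by its
# deterioration date d_j, and a_j + b_j otherwise.
def total_tardiness(seq, a, b, d, due):
    t, tard = 0.0, 0.0
    for j in seq:
        p = a[j] + (b[j] if t > d[j] else 0.0)   # step deterioration
        t += p
        tard += max(0.0, t - due[j])
    return tard

a   = [4, 3, 5]    # base processing times (invented toy instance)
b   = [2, 6, 1]    # deterioration penalties
d   = [5, 4, 9]    # deterioration dates
due = [6, 8, 15]   # due dates

best = min(permutations(range(3)), key=lambda s: total_tardiness(s, a, b, d, due))
print(best, total_tardiness(best, a, b, d, due))   # → (0, 1, 2) 0.0
```

Even in this tiny instance, starting a job after its deterioration date inflates its processing time and cascades into later jobs' tardiness, which is what makes the full parallel-machine problem hard.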
Feature Matching in Time Series Modelling
Xia, Yingcun
2011-01-01
Using a time series model to mimic an observed time series has a long history. However, with regard to this objective, conventional estimation methods for discrete-time dynamical models are frequently found to be wanting. In the absence of a true model, we prefer an alternative approach to conventional model fitting that typically involves one-step-ahead prediction errors. Our primary aim is to match the joint probability distribution of the observable time series, including long-term features of the dynamics that underpin the data, such as cycles, long memory and others, rather than short-term prediction. For want of a better name, we call this specific aim "feature matching". The challenges of model mis-specification, measurement errors and the scarcity of data are forever present in real time series modelling. In this paper, by synthesizing earlier attempts into an extended-likelihood, we develop a systematic approach to empirical time series analysis to address these challenges and to aim at achieving...
A one-step-ahead pseudo-DIC for comparison of Bayesian state-space models.
Millar, R B; McKechnie, S
2014-12-01
In the context of state-space modeling, conventional usage of the deviance information criterion (DIC) evaluates the ability of the model to predict an observation at time t given the underlying state at time t. Motivated by the failure of conventional DIC to clearly choose between competing multivariate nonlinear Bayesian state-space models for coho salmon population dynamics, and the computational challenge of alternatives, this work proposes a one-step-ahead DIC, DICp, where prediction is conditional on the state at the previous time point. Simulations revealed that DICp worked well for choosing between state-space models with different process or observation equations. In contrast, conventional DIC could be grossly misleading, with a strong preference for the wrong model. This can be explained by its failure to account for inflated estimates of process error arising from the model mis-specification. DICp is not based on a true conditional likelihood, but is shown to have interpretation as a pseudo-DIC in which the compensatory behavior of the inflated process errors is eliminated. It can be easily calculated using the DIC monitors within popular BUGS software when the process and observation equations are conjugate. The improved performance of DICp is demonstrated by application to the multi-stage modeling of coho salmon abundance in Lobster Creek, Oregon. © 2014, The International Biometric Society.
Models for dependent time series
Tunnicliffe Wilson, Granville; Haywood, John
2015-01-01
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater
Solid-state three-step model for high-harmonic generation from periodic crystals
Ikemachi, Takuya; Sato, Takeshi; Yumoto, Junji; Kuwata-Gonokami, Makoto; Ishikawa, Kenichi L
2016-01-01
We study high-harmonic generation (HHG) from solids driven by intense laser pulses using the time-dependent Schrödinger equation for a one-dimensional model periodic crystal. Based on the simulation results, we propose a simple model that can quantitatively explain many aspects of solid-state HHG, some of which have been experimentally observed. Incorporating interband tunneling, intraband acceleration, and recombination with the valence-band hole, our model can be viewed as a solid-state counterpart of the familiar three-step model highly successful for gas-phase HHG and provides a unified basis to understand HHG from gaseous media and solid-state materials. The solid-state three-step model describes how, by repeating intraband acceleration and interband tunneling, electrons climb up across multiple conduction bands. The key parameter in predicting the HHG spectrum structure from the band-climbing process is the peak-to-valley (or valley-to-peak) full amplitude of the pulse vector potential $A(t)$. When the...
Detection of Zika virus by SYBR green one-step real-time RT-PCR.
Xu, Ming-Yue; Liu, Si-Qing; Deng, Cheng-Lin; Zhang, Qiu-Yan; Zhang, Bo
2016-10-01
The ongoing Zika virus (ZIKV) outbreak has rapidly spread to new areas of the Americas, marking the first transmissions outside its traditional endemic areas in Africa and Asia. Due to its link with newborn defects and neurological disorders, the numerous infected cases throughout the world, and its various mosquito vectors, the virus has been declared an international public health emergency. In the present study, we developed a SYBR Green based one-step real-time RT-PCR assay for rapid detection of ZIKV. Our results revealed that the real-time assay is highly specific and sensitive in detecting ZIKV in cell samples. Importantly, the replication of ZIKV at different time points in infected cells could be rapidly monitored by the real-time RT-PCR assay. Specifically, the real-time RT-PCR showed acceptable performance in the measurement of infectious ZIKV RNA. This assay could detect ZIKV at a titer as low as 1 PFU/mL. The real-time RT-PCR assay could be a useful tool for further virological surveillance and diagnosis of ZIKV.
TMS field modelling-status and next steps
DEFF Research Database (Denmark)
Thielscher, Axel
2013-01-01
In recent years, an increasing number of studies have used geometrically accurate head models and finite element (FEM) or finite difference methods (FDM) to estimate the electric field induced by non-invasive neurostimulation techniques such as transcranial magnetic stimulation (TMS) or transcranial weak current stimulation (tCS; e.g., Datta et al., 2010; Thielscher et al., 2011). A general outcome was that the field estimates based on these more realistic models differ substantially from the results obtained with simpler head models. This suggests that the former models are indeed needed... field estimates based on accurate head models have already proven highly useful for a better understanding of the biophysics of non-invasive brain stimulation. The improved software tools now allow for systematic tests of the links between the estimated fields and the physiological effects in multi...
Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2008-01-07
Models involving stochastic differential equations (SDEs) play a prominent role in a wide range of applications where systems are not at the thermodynamic limit, for example, biological population dynamics. Therefore there is a need for numerical schemes that are capable of accurately and efficiently integrating systems of SDEs. In this work we introduce a variable step-size algorithm and apply it to systems of stiff SDEs with multiple multiplicative noise terms. The algorithm is validated using a subclass of SDEs called chemical Langevin equations that appear in the description of dilute chemical kinetics models, with important applications mainly in biology. Three representative examples are used to test and report on the behavior of the proposed scheme. We demonstrate the advantages and disadvantages of the proposed method over fixed time step integration schemes, showing that the adaptive time step method is considerably more stable than fixed step methods with no excessive additional computational overhead.
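A minimal adaptive-step Euler-Maruyama sketch conveys the idea (illustrative only, not the paper's algorithm): the local error is estimated by comparing one full step with two half steps driven by Brownian-bridge-consistent increments, and the next step size is adapted without rejections.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy SDE with multiplicative noise: dX = -2 X dt + 0.5 X dW.
def drift(x): return -2.0 * x
def diffusion(x): return 0.5 * x

def em_step(x, dt, dw):
    return x + drift(x) * dt + diffusion(x) * dw

x, t, dt, steps = 1.0, 0.0, 1e-3, 0
T, tol = 1.0, 1e-4
while t < T:
    dt = min(dt, T - t)
    dw = np.sqrt(dt) * rng.standard_normal()
    # Brownian bridge: the half-increment has mean dw/2 and variance dt/4,
    # so both solutions are driven by the same underlying Brownian path.
    dw1 = 0.5 * dw + 0.5 * np.sqrt(dt) * rng.standard_normal()
    x_full = em_step(x, dt, dw)
    x_half = em_step(em_step(x, dt / 2, dw1), dt / 2, dw - dw1)
    err = abs(x_full - x_half)           # local error estimate
    x, t, steps = x_half, t + dt, steps + 1
    # Grow or shrink the next step within safety bounds (no rejections).
    dt *= min(2.0, max(0.5, 0.9 * np.sqrt(tol / (err + 1e-16))))

print(steps)
```

As the solution decays, the error estimate shrinks and the controller lengthens the step, which is the source of the speedup over a fixed step chosen for the worst transient.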
A single-step genomic model with direct estimation of marker effects.
Liu, Z; Goddard, M E; Reinhardt, F; Reents, R
2014-09-01
Compared with the currently widely used multi-step genomic models for genomic evaluation, single-step genomic models can provide more accurate genomic evaluation by jointly analyzing phenotypes and genotypes of all animals and can properly correct for the effect of genomic preselection on genetic evaluations. The objectives of this study were to introduce a single-step genomic model, allowing a direct estimation of single nucleotide polymorphism (SNP) effects, and to develop efficient computing algorithms for solving equations of the single-step SNP model. We proposed an alternative to the current single-step genomic model based on the genomic relationship matrix by including an additional step for estimating the effects of SNP markers. Our single-step SNP model allowed flexible modeling of SNP effects in terms of the number and variance of SNP markers. Moreover, our single-step SNP model included a residual polygenic effect with trait-specific variance for reducing inflation in genomic prediction. A kernel calculation of the SNP model involved repeated multiplications of the inverse of the pedigree relationship matrix of genotyped animals with a vector, for which numerical methods such as preconditioned conjugate gradients can be used. For estimating SNP effects, a special updating algorithm was proposed to separate residual polygenic effects from the SNP effects. We extended our single-step SNP model to general multiple-trait cases. By taking advantage of a block-diagonal (co)variance matrix of SNP effects, we showed how to estimate multivariate SNP effects in an efficient way. A general prediction formula was derived for candidates without phenotypes, which can be used for frequent, interim genomic evaluations without running the whole genomic evaluation process. We discussed various issues related to implementation of the single-step SNP model in Holstein populations with an across-country genomic reference population.
Energy Technology Data Exchange (ETDEWEB)
Omelyan, Igor, E-mail: omelyan@ualberta.ca, E-mail: omelyan@icmp.lviv.ua [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada); Institute for Condensed Matter Physics, National Academy of Sciences of Ukraine, 1 Svientsitskii Street, Lviv 79011 (Ukraine); Kovalenko, Andriy, E-mail: andriy.kovalenko@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada)
2013-12-28
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
Kraft, Timothy W.
2016-01-01
Purpose To examine the predictions of alternative models for the stochastic shut-off of activated rhodopsin (R*) and their implications for the interpretation of experimentally recorded single-photon responses (SPRs) in mammalian rods. Theory We analyze the transitions that an activated R* molecule undergoes as a result of successive phosphorylation steps and arrestin binding. We consider certain simplifying cases for the relative magnitudes of the reaction rate constants and derive the probability distributions for the time to arrestin binding. In addition to the conventional model in which R* catalytic activity declines in a graded manner with successive phosphorylations, we analyze two cases in which the activity is assumed to occur not via multiple small steps upon each phosphorylation but via a single large step. We refer to these latter two cases as the binary R* shut-off and three-state R* shut-off models. Methods We simulate R*’s stochastic reactions numerically for the three models. In the simplifying cases for the ratio of rate constants in the binary and three-state models, we show that the probability distribution of the time to arrestin binding is accurately predicted. To simulate SPRs, we then integrate the differential equations for the downstream reactions using a standard model of the rod outer segment that includes longitudinal diffusion of cGMP and Ca2+. Results Our simulations of SPRs in the conventional model of graded shut-off of R* conform closely to the simulations in a recent study. However, the gain factor required to account for the observed mean SPR amplitude is higher than can be accounted for from biochemical experiments. In addition, a substantial minority of the simulated SPRs exhibit features that have not been reported in published experiments. Our simulations of SPRs using the model of binary R* shut-off appear to conform closely to experimental results for wild type (WT) mouse rods, and the required gain factor conforms to
Modelling the impacts of an invasive species across landscapes: a step-wise approach.
Ward, Darren; Morgan, Fraser
2014-01-01
We estimate the extent of ecological impacts of the invasive Asian paper wasp across different landscapes in New Zealand. We used: (i) a baseline distribution layer (modelled via MaxEnt); (ii) Asian paper wasp nest density (from >460 field plots, related to their preferences for specific land cover categories); and (iii) their foraging intensity (rates of foraging success, and the time available to forage on a seasonal basis). Using geographic information systems, this information is combined and modelled across different landscapes in New Zealand in a step-wise selection process. The highest densities of Asian paper wasps were in herbaceous saline vegetation, followed closely by built-up areas, and then scrub and shrubland. Nest densities of 34 per ha, and occupancy rates of 0.27, were recorded for herbaceous saline vegetation habitats. However, the extent of impacts of the Asian paper wasp remains relatively restricted because of narrow climate tolerances and the spatial restriction of preferred habitats. A step-wise process based on geographic information systems and species distribution models, in combination with factors such as distribution, density, and predation, creates a useful tool that allows the extent of impacts of invasive species to be assessed across large spatial scales. These models will be useful for conservation managers as they provide easy visual interpretation of results, and can help prioritise where direct conservation action or control of the invader is required.
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-05-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore pressure were measured during the tests at multiple gravity levels. Based on the scaling law of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore pressure measurements were included along with cumulative outflow as input data. The centrifuge modeling technique, with its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions, shortens testing time and can provide significant information for the parameter estimation procedure.
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-01-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for the parameter estimation.
Directory of Open Access Journals (Sweden)
Zongqi Liang
2009-01-01
Full Text Available We analyze a class of large time-stepping Fourier spectral methods for the semiclassical limit of the defocusing nonlinear Schrödinger equation and provide highly stable methods which allow much larger time steps than a standard implicit-explicit approach. An extra term, consistent with the order of the time discretization, is added to stabilize the numerical schemes. First-order and second-order semi-implicit schemes are then constructed and analyzed. Finally, numerical experiments are performed to demonstrate the effectiveness of the large time-stepping approaches.
Photodissociation of Acetylene and Acetone using Step-Scan Time-Resolved FTIR Emission Spectroscopy
McLaren, Ian A.; Wrobel, Jacek D.
1997-01-01
The photodissociation of acetylene and acetone was investigated as a function of added quenching gas pressure using step-scan time-resolved FTIR emission spectroscopy. The apparatus consists of a Bruker IFS88 step-scan Fourier transform infrared (FTIR) spectrometer coupled to a flow cell equipped with Welsh collection optics. Vibrationally excited C2H radicals were produced from the photodissociation of acetylene in the unfocused experiments. The infrared (IR) emission from these excited C2H radicals was investigated as a function of added argon pressure. Argon quenching rate constants for all C2H emission bands are of the order of 10^-13 cm^3/(molecule·s). Quenching of these radicals by acetylene is efficient, with a rate constant in the range of 10^-11 cm^3/(molecule·s). The relative intensity of the different C2H emission bands did not change with increasing argon or acetylene pressure. However, the overall IR emission intensity decreased, for example, by more than 50% when the argon partial pressure was raised from 0.2 to 2 Torr at a fixed precursor pressure of 160 mTorr. These observations provide evidence for the formation of a metastable C2H2 species, which is collisionally quenched by argon or acetylene. Problems encountered in the course of the experimental work are also described.
Energy Technology Data Exchange (ETDEWEB)
Toggweiler, Matthias, E-mail: rmf7@m4t.ch [ETH Zürich, Computer Science Department, Universitätsstrasse 6, 8092 Zürich (Switzerland); Paul Scherrer Institute, CH-5234 Villigen (Switzerland); MIT, Department of Physics, 77 Massachusetts Avenue, MA 02139 (United States); Adelmann, Andreas, E-mail: andreas.adelmann@psi.ch [Paul Scherrer Institute, CH-5234 Villigen (Switzerland); Arbenz, Peter, E-mail: arbenz@inf.ethz.ch [ETH Zürich, Computer Science Department, Universitätsstrasse 6, 8092 Zürich (Switzerland); Yang, Jianjun, E-mail: jianjun.yang@psi.ch [Paul Scherrer Institute, CH-5234 Villigen (Switzerland); China Institute of Atomic Energy, Beijing, 102413 (China)
2014-09-15
We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework. The idea is to adjust the frequency of costly self-field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state instead of calculating a local error estimate like a conventional adaptive procedure. Building on recent work, a more profound argument is given on how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a clearer analysis by introducing separate time steps for external field and self-field integration, turned out to be useful on its own for a large class of problems.
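The phase-space-dependent step choice described above can be sketched in a few lines. The controller function, the constant alpha, and the inverse-square force proxy below are illustrative assumptions, not OPAL's actual implementation:

```python
import math

def force_magnitude(r):
    """Inverse-square force proxy standing in for the space-charge field
    strength (hypothetical; OPAL computes the real self-fields)."""
    return 1.0 / (r * r)

def choose_time_step(r, dt_max, alpha=0.01):
    """Pick dt as a decreasing function of the current force magnitude,
    so close encounters (strong forces) get finer time resolution,
    capped at dt_max when forces are weak."""
    return min(dt_max, alpha / math.sqrt(force_magnitude(r)))
```

As in the Kepler analogy, the step shrinks near a close encounter: at separation 0.1 the chosen dt is ten times smaller than at separation 1.0, while at large separations the cap dt_max applies.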
The Two-Step Student Teaching Model: Training for Accountability.
Corlett, Donna
This model of student teaching preparation was developed in collaboration with public schools to focus on systematic experience in teaching and training for accountability in the classroom. In the two-semester plan, students begin with teacher orientation and planning days, serve as teacher aides, attend various methods courses, teach several…
Hsu, Ming-Chen
2010-02-01
The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395.
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
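A minimal stand-in for such a detector illustrates the idea: flag a point-to-point change far larger than the series' typical scatter, then offset all later samples to undo the permanent sensitivity drop. The thresholding rule and the recovery-free correction below are simplifications of the actual PDC algorithm:

```python
def detect_and_correct_steps(flux, threshold=5.0):
    """Flag index i+1 as a step discontinuity when the point-to-point
    change exceeds `threshold` times the median absolute point-to-point
    change, then remove the step by offsetting all later samples
    (a simplified stand-in for the SPSD detector described above;
    it ignores the partial exponential recovery)."""
    diffs = [abs(flux[i + 1] - flux[i]) for i in range(len(flux) - 1)]
    mad = sorted(diffs)[len(diffs) // 2] or 1e-12  # typical scatter
    corrected = list(flux)
    steps = []
    for i in range(len(corrected) - 1):
        d = corrected[i + 1] - corrected[i]
        if abs(d) > threshold * mad:
            steps.append(i + 1)
            # undo the permanent offset for every later sample
            for j in range(i + 1, len(corrected)):
                corrected[j] -= d
    return corrected, steps
```

Applied to a light curve with a -0.5 sensitivity drop halfway through, the sketch flags the drop and restores the later samples to the pre-event level.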
Solute transport modeling using morphological parameters of step-pool reaches
JiméNez, Mario A.; Wohl, Ellen
2013-03-01
Step-pool systems have been widely studied during the past few years, resulting in enhanced knowledge of mechanisms for sediment transport, energy dissipation and patterns of self-organization. We use rhodamine tracer data collected in nine step-pool reaches during high, intermediate and low flows to explore scaling of solute transport processes. Using the scaling patterns found, we propose an extension of the Aggregated Dead Zone (ADZ) approach for solute transport modeling based on the morphological features of step-pool units and their corresponding inherent variability within a stream reach. In addition to discharge, the reach-average bankfull width, mean step height, and the ratio of pool length to step-to-step length can be used as explanatory variables for the dispersion process within the studied reaches. These variables appeared to be sufficient for estimating ADZ model parameters and simulating solute transport in predictive mode for applications in reaches lacking tracer data.
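In discrete form, the ADZ concept amounts to a pure advective delay followed by first-order mixing. The sketch below assumes the delay tau and residence time T have already been estimated from the morphological predictors named above (discharge, bankfull width, mean step height, pool-to-step length ratio); the numerical values in the usage are illustrative:

```python
import math

def adz_route(c_in, dt, T, tau):
    """Discrete Aggregated Dead Zone routing: upstream concentration is
    advected with a pure delay tau, then mixed with first-order residence
    time T (exponential smoothing with coefficient exp(-dt/T))."""
    a = math.exp(-dt / T)
    delay = int(round(tau / dt))
    out = [0.0] * len(c_in)
    for i in range(1, len(c_in)):
        upstream = c_in[i - delay] if i >= delay else 0.0
        out[i] = a * out[i - 1] + (1.0 - a) * upstream
    return out
```

Routing a unit tracer pulse through one ADZ reach reproduces the qualitative behaviour of the rhodamine data: a delayed, attenuated, positively skewed downstream peak.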
JIMM: the next step for mission-level models
Gump, Jamieson; Kurker, Robert G.; Nalepka, Joseph P.
2001-09-01
The Simulation Based Acquisition (SBA) process is one in which the planning, design, and test of a weapon system or other product is done through the more effective use of modeling and simulation, information technology, and process improvement. This process results in a product that is produced faster, cheaper, and more reliably than its predecessors. Because the SBA process requires realistic and detailed simulation conditions, it was necessary to develop a simulation tool that would provide a simulation environment acceptable for doing SBA analysis. The Joint Integrated Mission Model (JIMM) was created to help define and meet the analysis, test and evaluation, and training requirements of a Department of Defense program utilizing SBA. Through its generic nature of representing simulation entities, its data analysis capability, and its robust configuration management process, JIMM can be used to support a wide range of simulation applications as both a constructive and a virtual simulation tool. JIMM is a Mission Level Model (MLM). A MLM is capable of evaluating the effectiveness and survivability of a composite force of air and space systems executing operational objectives in a specific scenario against an integrated air and space defense system. Because MLMs are useful for assessing a system's performance in a realistic, integrated, threat environment, they are key to implementing the SBA process. JIMM is a merger of the capabilities of one legacy model, the Suppressor MLM, into another, the Simulated Warfare Environment Generator (SWEG) MLM. By creating a more capable MLM, JIMM will not only be a tool to support the SBA initiative, but could also provide the framework for the next generation of MLMs.
Two-step memory within Continuous Time Random Walk. Description of double-auction market dynamics
Gubiec, Tomasz
2013-01-01
By means of a novel version of the Continuous-Time Random Walk (CTRW) model with memory, we describe, as an example, the stochastic process of a single share price on a double-auction market within the high-frequency time scale. The memory present in the model is understood as dependence between successive share price jumps, while waiting times between price changes are considered as i.i.d. random variables. The range of this memory is defined herein by dependence between three successive jumps of the process. This dependence is motivated both empirically, by analysis of empirical two-point histograms, and theoretically, by analysis of the bid-ask bounce mechanism containing some delay. Our model turns out to be analytically solvable, which enables a direct comparison of its predictions with empirical counterparts, for instance, with such a significant and commonly used quantity as the velocity autocorrelation function. This work strongly extends the capabilities of the CTRW formalism.
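A toy version of such a walk can be simulated directly. The one-step sign-reversal memory below is a deliberate simplification of the paper's two-step (three-jump) dependence, and the parameters are illustrative:

```python
import random

def simulate_ctrw(n_jumps, p_reverse=0.75, seed=1):
    """Minimal CTRW price path: i.i.d. exponential waiting times between
    jumps, and unit jumps whose sign reverses with probability p_reverse
    (a one-step stand-in for the bid-ask-bounce memory discussed above)."""
    rng = random.Random(seed)
    t, x, sign = 0.0, 0.0, 1
    times, prices = [0.0], [0.0]
    for _ in range(n_jumps):
        t += rng.expovariate(1.0)       # i.i.d. waiting time
        if rng.random() < p_reverse:    # memory between successive jumps
            sign = -sign
        x += sign
        times.append(t)
        prices.append(x)
    return times, prices
```

With p_reverse > 0.5 successive jumps anticorrelate, reproducing the negative short-lag velocity autocorrelation that the bid-ask bounce imprints on high-frequency price series.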
Noise generated by model step lap core configurations of grain oriented electrical steel
Energy Technology Data Exchange (ETDEWEB)
Snell, David [Cogent Power Ltd., Development and Market Research, Orb Electrical Steels, Corporation Road, Newport, South Wales NP19 OXT (United Kingdom)], E-mail: Dave.snell@cogent-power.com
2008-10-15
Although it is important to reduce the power loss associated with transformer cores by use of electrical steel of the optimum grade, it is equally important to minimise the noise generated by the core. This paper discusses the effect of variations in the number of steps (3, 5, and 7) and the step overlap (2, 4, and 6 mm) on noise associated with model step lap cores of conventional, high permeability and ball unit domain refined high permeability grain oriented electrical steel. A-weighted sound pressure level noise measurements (LAeq) were made at various locations of the core over the frequency range 25-16,000 Hz. For all step lap cores investigated, the noise generated was dependent on the induction level, and on the number of steps and step overlap employed. The use of 3 step lap cores and step overlaps of 2 mm should be avoided, if low noise is to be achieved. There was very little difference between the noise emitted by the 5 and 7 step lap cores. Similar noise levels were noted for 27M0H material in the non-domain refined (NDR) and ball unit domain refined condition for a 5 step lap core with 6 mm step overlap.
Silver nanoparticle enhanced immunoassays: one step real time kinetic assay for insulin in serum.
Lochner, Nina; Lobmaier, Christina; Wirth, Michael; Leitner, Alfred; Pittner, Fritz; Gabor, Franz
2003-11-01
Silver nanoparticle enhanced fluorescence is introduced as an alternative to surface plasmon resonance techniques for real-time monitoring of biorecognitive interactions or immunoassays. The method relies on the phenomenon that an electromagnetic near field is generated upon illumination of the surface of silver nanoparticles. The interaction of this field with nearby fluorophores results in fluorescence enhancement; thus, fluorophores in the bulk solution can be discriminated from surface-bound fluorophores. Anti-insulin antibodies were immobilized on the surface of silver colloids as follows: a ready-to-use microplate was prepared by bottom-up coating with layers of aminosilane, silver nanoparticles, Fc-recognizing F(ab)2 fragments, and anti-insulin antibodies. At equilibrium conditions fluorescein-labeled insulin could only be detected in the presence of the colloid; the detection limit was 250 nM, and a fourfold increase in fluorescence was observed upon real-time monitoring. The competitive assay of labeled and unlabeled insulin revealed a working range of 10-200 nM insulin in serum. The rapid single-step immunoassay is easy to perform even in microplate format, its sensitivity is comparable to ELISA techniques, and it offers broad application for real-time monitoring of molecular recognition processes.
Albareda, Guillermo; Abedi, Ali; Tavernelli, Ivano; Rubio, Angel
2016-12-01
It was recently shown [G. Albareda et al., Phys. Rev. Lett. 113, 083003 (2014)], 10.1103/PhysRevLett.113.083003 that within the conditional decomposition approach to the coupled electron-nuclear dynamics, the electron-nuclear wave function can be exactly decomposed into an ensemble of nuclear wave packets effectively governed by nuclear conditional time-dependent potential-energy surfaces (C-TDPESs). Employing a one-dimensional model system, we show that for strong nonadiabatic couplings the nuclear C-TDPESs exhibit steps that bridge piecewise adiabatic Born-Oppenheimer potential-energy surfaces. The nature of these steps is identified as an effect of electron-nuclear correlation. Furthermore, a direct comparison with similar discontinuities recently reported in the context of the exact factorization framework allows us to draw conclusions about the universality of these discontinuities, viz., they are inherent to all nonadiabatic nuclear dynamics approaches based on (exact) time-dependent potential-energy surfaces.
Modeling printed circuit board curvature in relation to manufacturing process steps
Schuerink, G.A.; Slomp, M.; Wits, Wessel Willems; Legtenberg, R.; Legtenberg, R.; Kappel, E.A.
2013-01-01
This paper presents an analytical method to predict deformations of Printed Circuit Boards (PCBs) in relation to their manufacturing process steps. Classical Lamination Theory (CLT) is used as a basis. The model tracks internal stresses and includes the results of subsequent production steps, such a
A Semi-Empirical Two Step Carbon Corrosion Reaction Model in PEM Fuel Cells
Energy Technology Data Exchange (ETDEWEB)
Young, Alan; Colbow, Vesna; Harvey, David; Rogers, Erin; Wessel, Silvia
2013-01-01
The cathode CL of a polymer electrolyte membrane fuel cell (PEMFC) was exposed to high potentials, 1.0 to 1.4 V versus a reversible hydrogen electrode (RHE), that are typically encountered during start up/shut down operation. While both platinum dissolution and carbon corrosion occurred, the carbon corrosion effects were isolated and modeled. The presented model separates the carbon corrosion process into two reaction steps: (1) oxidation of the carbon surface to carbon-oxygen groups, and (2) further corrosion of the oxidized surface to carbon dioxide/monoxide. To oxidize and corrode the cathode catalyst carbon support, the CL was subjected to an accelerated stress test that cycled the potential from 0.6 VRHE to an upper potential limit (UPL) ranging from 0.9 to 1.4 VRHE at varying dwell times. The reaction rate constants and specific capacitances of carbon and platinum were fitted by evaluating the double layer capacitance (Cdl) trends. Carbon surface oxidation increased the Cdl due to increased specific capacitance for carbon surfaces with carbon-oxygen groups, while the second corrosion reaction decreased the Cdl due to loss of the overall carbon surface area. The first oxidation step differed between carbon types, while both reaction rate constants were found to have a dependency on UPL, temperature, and gas relative humidity.
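The two-step scheme can be captured by a pair of rate equations: the first step builds oxide coverage on bare carbon, and the second consumes oxidized area as CO/CO2. The explicit Euler sketch below uses hypothetical lumped rate constants; the study itself fits its constants against the double-layer capacitance trends versus UPL, temperature, and humidity:

```python
def corrode(k1, k2, dt, n_steps, theta0=0.0, area0=1.0):
    """Euler integration of a two-step corrosion sketch:
    step 1: bare carbon -> surface oxides, raising coverage theta;
    step 2: oxidized surface -> CO/CO2, shrinking total area.
    k1, k2 are hypothetical lumped rate constants (1/s)."""
    theta, area = theta0, area0
    for _ in range(n_steps):
        d_theta = k1 * (1.0 - theta) - k2 * theta  # oxidation vs. loss
        d_area = -k2 * theta * area                # corrosion of oxides
        theta += dt * d_theta
        area += dt * d_area
        theta = min(max(theta, 0.0), 1.0)
    return theta, area
```

Consistent with the observed Cdl behaviour, coverage theta rises toward the steady value k1/(k1+k2) (the capacitance-increasing phase) while the total area decays monotonically (the capacitance-decreasing phase).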
Directory of Open Access Journals (Sweden)
Gang Zhang
2017-07-01
Full Text Available The Sacramento model is widely utilized in hydrological forecasting, of which the accuracy and performance are primarily determined by the model parameters, indicating the key role of parameter estimation. This paper presents a multi-step parameter estimation method, which divides the parameter estimation of the Sacramento model into three steps and realizes optimization step by step. We first use the immune clonal selection algorithm (ICSA) to solve the non-linear objective function of parameter estimation, and compare the parameter calibration result on ideal artificial data with Shuffled Complex Evolution (SCE-UA), Parallel Genetic Algorithm (PGA), and the Serial Master-slave Swarm Shuffling Evolution Algorithm based on Particle Swarm Optimization (SMSE-PSO). The comparison shows that ICSA has the best convergence, efficiency, and precision. Then we apply ICSA to the parameter estimation of the single-step and multi-step Sacramento model and simulate 32 floods based on application examples of the Dongyang and Tantou river basins for validation. The results of the multi-step method based on ICSA show higher accuracy and a 100% qualified rate, indicating its higher precision and reliability, which has great potential to improve the Sacramento model and hydrological forecasting.
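A minimal clonal-selection optimizer conveys the idea behind ICSA: clone good candidate parameter sets, hypermutate the clones with a radius that shrinks for better-ranked antibodies, and keep only improvements. The population size, mutation schedule, and 1-D test function below are illustrative, not those used for the Sacramento model:

```python
import random

def icsa_minimize(f, bounds, pop=20, clones=5, gens=60, seed=0):
    """Minimal immune clonal selection sketch for minimizing f on an
    interval: rank antibodies by affinity (low f), clone the better half,
    mutate clones with a radius that shrinks with generation and rank,
    and keep a mutant only when it improves on its parent."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [rng.uniform(lo, hi) for _ in range(pop)]
    for g in range(gens):
        P.sort(key=f)
        elites = P[:pop // 2]
        improved = []
        for rank, x in enumerate(elites):
            radius = (hi - lo) * 0.5 ** (g / 10.0) / (rank + 1)
            for _ in range(clones):
                y = min(max(x + rng.gauss(0.0, radius), lo), hi)
                if f(y) < f(x):
                    x = y  # clonal selection: keep improving mutants
            improved.append(x)
        P = elites + improved
    return min(P, key=f)
```

On a simple quadratic objective the sketch converges to the minimizer, which is the same mechanism the paper exploits for the non-linear calibration objective.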
Two-dimensional modeling of stepped planing hulls with open and pressurized air cavities
Directory of Open Access Journals (Sweden)
Konstantin I. Matveev
2012-06-01
Full Text Available A method of hydrodynamic discrete sources is applied for two-dimensional modeling of stepped planing surfaces. The water surface deformations, wetted hull lengths, and pressure distribution are calculated at given hull attitude and Froude number. Pressurized air cavities that improve hydrodynamic performance can also be modeled with the current method. Presented results include validation examples, parametric calculations of a single-step hull, effect of trim tabs, and performance of an infinite series of periodic stepped surfaces. It is shown that transverse steps can lead to higher lift-drag ratio, although at reduced lift capability, in comparison with a stepless hull. Performance of a multi-step configuration is sensitive to the wave pattern between hulls, which depends on Froude number and relative hull spacing.
Single-step chemistry model and transport coefficient model for hydrogen combustion
Institute of Scientific and Technical Information of China (English)
WANG ChangJian; WEN Jennifer; LU ShouXiang; GUO Jin
2012-01-01
To satisfy the needs of large-scale hydrogen combustion and explosion simulation, a method is presented to establish a single-step chemistry model and transport model for a fuel-air mixture. If the reaction formula for the hydrogen-air mixture is H2 + 0.5O2 → H2O, the reaction rate model is ω = 1.13×10^15 [H2][O2] exp(-46.37 T0/T) mol/(cm^3 s), and the transport coefficient model is μ = k/Cp = ρD = 7.0×10^-5 T^0.7 g/(cm s). Using the current models and a reference model to simulate the steady Zeldovich-von Neumann-Doering (ZND) wave and a freely propagating laminar flame, the results agree well. Additionally, deflagration-to-detonation transition in an obstructed channel was also simulated, and the numerical results are consistent with the experimental results. These provide reasonable support for the current method and new models.
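The single-step rate law quoted above can be evaluated directly. The reference temperature T0 is not specified in the abstract, so the 298.15 K default below is an assumption:

```python
import math

def reaction_rate(c_h2, c_o2, T, T0=298.15):
    """Single-step rate ω = 1.13e15 [H2][O2] exp(-46.37·T0/T)
    in mol/(cm^3·s); concentrations in mol/cm^3, temperatures in K
    (T0 = 298.15 K is an assumed reference value)."""
    return 1.13e15 * c_h2 * c_o2 * math.exp(-46.37 * T0 / T)
```

The exponential factor gives the expected Arrhenius behaviour: the rate rises steeply with temperature and vanishes when either reactant is depleted.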
Enriching step-based product information models to support product life-cycle activities
Sarigecili, Mehmet Ilteris
The representation and management of product information in its life-cycle requires standardized data exchange protocols. Standard for Exchange of Product Model Data (STEP) is such a standard that has been used widely by the industries. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
A Step Forward to Closing the Loop between Static and Dynamic Reservoir Modeling
Directory of Open Access Journals (Sweden)
Cancelliere M.
2014-12-01
Full Text Available The current trend for history matching is to find multiple calibrated models instead of a single set of model parameters that match the historical data. The advantage of several current workflows involving assisted history matching techniques, particularly those based on heuristic optimizers or direct search, is that they lead to a number of calibrated models that partially address the problem of the non-uniqueness of the solutions. The importance of achieving multiple solutions is that calibrated models can be used for a true quantification of the uncertainty affecting the production forecasts, which represent the basis for technical and economic risk analysis. In this paper, the importance of incorporating the geological uncertainties in a reservoir study is demonstrated. A workflow, which includes the analysis of the uncertainty associated with the facies distribution for a fluvial depositional environment in the calibration of the numerical dynamic models and, consequently, in the production forecast, is presented. The first step in the workflow was to generate a set of facies realizations starting from different conceptual models. After facies modeling, the petrophysical properties were assigned to the simulation domains. Then, each facies realization was calibrated separately by varying permeability and porosity fields. Data assimilation techniques were used to calibrate the models in a reasonable span of time. Results showed that even the adoption of a conceptual model for facies distribution clearly representative of the reservoir internal geometry might not guarantee reliable results in terms of production forecast. Furthermore, results also showed that realizations which seemed fully acceptable after calibration were not representative of the true reservoir internal configuration and provided wrong production forecasts; conversely, realizations which did not show a good fit of the production data could reliably predict the reservoir
Adaptive statistic tracking control based on two-step neural networks with time delays.
Yi, Yang; Guo, Lei; Wang, Hong
2009-03-01
This paper presents a new type of control framework for dynamical stochastic systems, called statistic tracking control (STC). The system considered is general and non-Gaussian, and the tracking objective is the statistical information of a given target probability density function (pdf), rather than a deterministic signal. The control aims at making the statistical information of the output pdfs follow that of a target pdf. For such a control framework, a variable structure adaptive tracking control strategy is first established using two-step neural network models. Following the B-spline neural network approximation to the integrated performance function, the concerned problem is transformed into the tracking of given weights. The dynamic neural network (DNN) is employed to identify the unknown nonlinear dynamics between the control input and the weights related to the integrated function. To achieve the required control objective, an adaptive controller based on the proposed DNN is developed so as to track a reference trajectory. Stability analysis for both the identification and tracking errors is developed via the use of the Lyapunov stability criterion. Simulations are given to demonstrate the efficiency of the proposed approach.
Application of a four-step HMX kinetic model to an impact-induced friction ignition problem
Energy Technology Data Exchange (ETDEWEB)
Perry, William L [Los Alamos National Laboratory; Gunderson, Jake A [Los Alamos National Laboratory; Dickson, Peter M [Los Alamos National Laboratory
2010-01-01
There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict the time and location of ignition for simple geometries. However, relatively little attention has been given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 1960s and 1970s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook off ignition. In the 1970s, Catalano et al. noted that single-step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 1980s, Tarver and McGuire published their well-known three-step kinetic expression that included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition time prediction for the ODTX. However, the Tarver/McGuire model could not produce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson, et
Stability of networked control systems with multi-step delay based on time-division algorithm
Institute of Scientific and Technical Information of China (English)
Changlin MA; Huajing FANG
2005-01-01
A new control mode is proposed for a networked control system whose network-induced delay is longer than a sampling period. A time-division algorithm is presented to implement the control mode and to model such a networked control system mathematically. An infinite horizon controller is designed, which renders the networked control system mean square exponentially stable. Simulation results show the validity of the proposed theory.
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
Directory of Open Access Journals (Sweden)
Hepburn Iain
2012-05-01
Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
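The exact stochastic kinetics that STEPS builds on can be sketched with a minimal direct-method Gillespie SSA (STEPS itself uses the more efficient composition-and-rejection variant on tetrahedral meshes; the reaction system, rate constants and counts below are invented for illustration):

```python
import random
import math

def gillespie_ssa(x, rates, stoich, t_end, seed=1):
    """Direct-method Gillespie SSA for a well-mixed reaction system.
    x: dict of species counts; rates: propensity functions of the state;
    stoich: per-reaction dicts of count changes. Minimal sketch only."""
    random.seed(seed)
    t = 0.0
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:
            break                                  # no reaction can fire
        t += -math.log(random.random()) / a0       # exponential waiting time
        u, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):                 # pick reaction j ~ a[j]/a0
            acc += aj
            if u <= acc:
                break
        for sp, d in stoich[j].items():
            x[sp] += d
    return x

# Reversible isomerization A <-> B with k1 = k2 = 1.0 (hypothetical system)
state = gillespie_ssa(
    {"A": 100, "B": 0},
    rates=[lambda s: 1.0 * s["A"], lambda s: 1.0 * s["B"]],
    stoich=[{"A": -1, "B": +1}, {"A": +1, "B": -1}],
    t_end=10.0,
)
print(state["A"] + state["B"])  # 100: total count is conserved
```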
Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios
2010-12-01
We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
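The kind of coupled nonlinear SDE system described above can be illustrated with an Euler-Maruyama integration of a linearized nearest-neighbor interaction under additive Gaussian white noise (the coupling constant, noise strength and step counts below are invented placeholders, not the BCF parameters):

```python
import numpy as np

def euler_maruyama_terraces(n_steps=2000, n_terraces=50, dt=1e-3,
                            K=1.0, D=0.1, seed=0):
    """Euler-Maruyama integration of linearized coupled terrace-width SDEs,
    dw_i = K (w_{i+1} - 2 w_i + w_{i-1}) dt + sqrt(2 D dt) xi_i,
    with periodic boundaries. Illustrative stand-in for the BCF-based
    systems discussed above; K and D are hypothetical parameters."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_terraces)                    # start from uniform widths
    for _ in range(n_steps):
        lap = np.roll(w, -1) - 2 * w + np.roll(w, 1)   # discrete Laplacian
        w += K * lap * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_terraces)
    return w

w = euler_maruyama_terraces()
print(float(w.mean()))  # mean width stays near 1.0 (conserved up to noise)
```

A terrace-width distribution would then be estimated by histogramming `w` over many independent runs.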
A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling
Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.
2015-01-01
In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.
A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling
Directory of Open Access Journals (Sweden)
F. M. Achete
2015-02-01
Full Text Available In estuaries most of the sediment load is carried in suspension. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. Suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. A robust sediment model is the first step towards a chain of models including contaminant and phytoplankton dynamics and habitat modeling. This work aims to determine turbidity levels in the complex-geometry Delta of the San Francisco Estuary using a process-based approach (D-Flow Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters, the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year (Water Year 2011). Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The current model may act as the base model for a chain of ecological models and climate scenario forecasting.
Sparse time series chain graphical models for reconstructing genetic networks
Abegaz, Fentaw; Wit, Ernst
2013-01-01
We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of co
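The chain structure over time steps can be illustrated with a crude stand-in: ordinary least squares estimation of the autoregressive coefficient matrix of a VAR(1) process followed by hard thresholding (the paper uses a penalized likelihood with a sparse precision matrix; the two-gene system and threshold below are invented):

```python
import numpy as np

def sparse_var1(X, threshold=0.1):
    """Crude sketch of estimating a sparse autoregressive coefficient
    matrix A in x_t = A x_{t-1} + noise: OLS plus hard thresholding.
    Only an illustration of the time-chain structure, not the paper's
    penalized estimator."""
    past, present = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, present, rcond=None)
    A = A.T                          # rows: target variable, cols: regulator
    A[np.abs(A) < threshold] = 0.0   # enforce sparsity
    return A

# Simulate a hypothetical two-gene network with one absent edge (A[0,1] = 0)
rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.0], [0.5, 0.3]])
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(2)
A_hat = sparse_var1(x)
print(A_hat.shape)  # (2, 2)
```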
STEPS: modeling and simulating complex reaction-diffusion systems with Python
Directory of Open Access Journals (Sweden)
Stefan Wils
2009-06-01
Full Text Available We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code.
Timing analysis by model checking
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed-if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
One step at a time: how to toilet train children with learning disabilities.
Rogers, June
Toilet training children with learning disabilities can present challenges and requires careful assessment and management. This article examines strategies for toilet training using a five-step approach to bladder and bowel control.
De Basabe, Jonás D.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
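The time-step limits the paper quantifies can be demonstrated on the simplest case: leap-frog finite differences for the 1D wave equation, which is stable only for CFL numbers at or below 1 (the grid, wave speed and step counts below are illustrative, not the SEM/IP-DGM configurations of the paper):

```python
import numpy as np

def leapfrog_wave_max(c=1.0, dx=0.01, cfl=0.9, n_steps=500):
    """Leap-frog scheme for u_tt = c^2 u_xx with periodic boundaries;
    returns max |u| after n_steps. Above the CFL limit (cfl > 1) the
    round-off components of the solution grow without bound."""
    dt = cfl * dx / c
    x = np.arange(0.0, 1.0, dx)
    u_prev = np.sin(2 * np.pi * x)                 # rightward traveling wave
    u = np.sin(2 * np.pi * (x - c * dt))
    r2 = (c * dt / dx) ** 2
    for _ in range(n_steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        u, u_prev = 2 * u - u_prev + r2 * lap, u   # leap-frog update
    return float(np.max(np.abs(u)))

print(leapfrog_wave_max(cfl=0.9) < 2.0)   # True: stable, amplitude bounded
print(leapfrog_wave_max(cfl=1.1) > 1e3)   # True: unstable, solution blows up
```

Higher-order time-stepping schemes such as the fourth-order Lax-Wendroff method shift this limit upward, which is the 73 per cent gain discussed above.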
Chang, Fi-John; Chen, Pin-An; Lu, Ying-Ray; Huang, Eric; Chang, Kai-Yao
2014-09-01
Urban flood control is a crucial task, which commonly faces fast rising peak flows resulting from urbanization. To mitigate future flood damages, it is imperative to construct an on-line accurate model to forecast inundation levels during flood periods. The Yu-Cheng Pumping Station located in Taipei City of Taiwan is selected as the study area. Firstly, historical hydrologic data are fully explored by statistical techniques to identify the time span of rainfall affecting the rise of the water level in the floodwater storage pond (FSP) at the pumping station. Secondly, effective factors (rainfall stations) that significantly affect the FSP water level are extracted by the Gamma test (GT). Thirdly, one static artificial neural network (ANN) (backpropagation neural network-BPNN) and two dynamic ANNs (Elman neural network-Elman NN; nonlinear autoregressive network with exogenous inputs-NARX network) are used to construct multi-step-ahead FSP water level forecast models through two scenarios, in which scenario I adopts rainfall and FSP water level data as model inputs while scenario II adopts only rainfall data as model inputs. The results demonstrate that the GT can efficiently identify the effective rainfall stations as important inputs to the three ANNs; the recurrent connections from the output layer (NARX network) impose more effects on the output than those of the hidden layer (Elman NN) do; and the NARX network performs the best in real-time forecasting. The NARX network produces coefficients of efficiency within 0.9-0.7 (scenario I) and 0.7-0.5 (scenario II) in the testing stages for 10-60-min-ahead forecasts accordingly. This study suggests that the proposed NARX models can be valuable and beneficial to the government authority for urban flood control.
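The scenario I input structure (lagged rainfall plus lagged water level predicting the next water level) can be sketched with a linear stand-in for the NARX network; the synthetic "leaky integrator" pond dynamics and coefficients below are invented:

```python
import numpy as np

def lagged_design(rain, level, n_lags=3):
    """NARX-style design matrix: regress the current water level on the
    last n_lags rainfall and water-level values (scenario I above).
    A linear, purely illustrative stand-in for the NARX network."""
    rows, targets = [], []
    for t in range(n_lags, len(level)):
        rows.append(np.concatenate([rain[t - n_lags:t], level[t - n_lags:t]]))
        targets.append(level[t])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(42)
rain = rng.random(300)
level = np.zeros(300)
for t in range(1, 300):                 # synthetic pond: leaky integrator
    level[t] = 0.9 * level[t - 1] + 0.5 * rain[t - 1]

X, y = lagged_design(rain, level)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
print(round(float(np.corrcoef(pred, y)[0, 1]), 2))  # 1.0: dynamics are exactly linear
```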
One-step electrodeposition process of CuInSe2: Deposition time effect
Indian Academy of Sciences (India)
O Meglali; N Attaf; A Bouraiou; M S Aida; S Lakehal
2014-10-01
CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. The films were deposited, during 5, 10, 15 and 20 min, from a deionized water solution consisting of CuCl2, InCl3 and SeO2 onto ITO-coated glass substrates. As-deposited films were annealed under vacuum at 300 °C during 30 min. The structure, optical band gap and electrical resistivity of the elaborated films were studied using X-ray diffraction (XRD) and Raman spectroscopy, UV spectrophotometry, and the four-point probe method, respectively. Microstructural parameters such as lattice constants, crystallite size, dislocation density and strain have been evaluated. The XRD investigation proved that the film deposited at 20 min presents a CuInSe2 single phase in its chalcopyrite structure with preferred orientation along the (1 1 2) direction, whereas the films deposited at 5, 10 and 15 min show the CuInSe2 chalcopyrite structure with In2Se3 as a secondary phase. We have found that the formation mechanism of CuInSe2 depends on the In2Se3 phase. The optical band gap of the films is found to decrease from 1.17 to 1.04 eV with increase in deposition time. All films show Raman spectra with a dominant A1 mode at 174 cm-1, confirming the chalcopyrite crystalline quality of these films. The films exhibited resistivities ranging from 2.3 × 10-3 to 4.4 × 10-1 Ω cm.
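Crystallite sizes of the kind evaluated above are commonly obtained from XRD peak broadening via the Scherrer equation; a short sketch (the peak position and width below are invented illustration values, not the paper's measurements):

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate of crystallite size from an XRD peak:
    D = K * lambda / (beta * cos(theta)), with beta the peak FWHM in
    radians and lambda the Cu K-alpha wavelength by default."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical (112) chalcopyrite peak near 2-theta = 26.6 deg, FWHM 0.4 deg
size_nm = scherrer_size(26.6, 0.4)
print(round(size_nm, 1))  # 20.4 (nm)
```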
The Influence of Time Spent in Outdoor Play on Daily and Aerobic Step Count in Costa Rican Children
Morera Castro, Maria del Rocio
2011-01-01
The purpose of this study is to examine the influence of time spent in outdoor play (i.e., on weekday and weekend days) on daily (i.e., average step count) and aerobic step count (i.e., average moderate to vigorous physical activity [MVPA] during the weekdays and weekend days) in fifth grade Costa Rican children. It was hypothesized that: (a)…
Mucientes, A. E.; de la Pena, M. A.
2009-01-01
The concentration-time integrals method has been used to solve kinetic equations of parallel-consecutive first-order reactions with a reversible step. This method involves the determination of the area under the curve for the concentration of a given species against time. Computer techniques are used to integrate experimental curves and the method…
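The idea can be illustrated on synthetic data for a single first-order step: integrating dC/dt = -kC from 0 to T gives C(T) - C(0) = -k * integral(C dt), so k is recovered from the area under the concentration curve (the rate constant and time grid below are invented; the paper treats parallel-consecutive schemes with a reversible step):

```python
import numpy as np

k_true = 0.7
t = np.linspace(0.0, 5.0, 2001)
C = np.exp(-k_true * t)                  # synthetic "experimental" curve

# Concentration-time integral: trapezoidal area under C(t) vs t
area = float(np.sum((C[1:] + C[:-1]) * np.diff(t)) / 2.0)
k_est = (C[0] - C[-1]) / area
print(round(float(k_est), 2))  # 0.7
```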
Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods
DEFF Research Database (Denmark)
Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin;
2013-01-01
Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was...
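The model-based reliabilities compared above come from the diagonal of the inverted mixed-model-equation coefficient matrix: r_i^2 = 1 - PEV_i / sigma2_a. A tiny dense sketch for a plain animal model (one overall mean, unrelated animals, invented variance components; the paper's methods add genomic relationship information to this structure):

```python
import numpy as np

def model_reliabilities(X, Z, A_inv, sigma2_a=1.0, sigma2_e=2.0):
    """Reliabilities of breeding values from the full inverse of the MME
    coefficient matrix. PEV_i = sigma2_e * diag(C^-1) over the random
    (animal) block; r_i^2 = 1 - PEV_i / sigma2_a. Dense toy sketch."""
    lam = sigma2_e / sigma2_a
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * A_inv]])
    C_inv = np.linalg.inv(C)
    n_fixed = X.shape[1]
    pev = sigma2_e * np.diag(C_inv)[n_fixed:]   # prediction error variances
    return 1.0 - pev / sigma2_a

# Hypothetical design: 4 unrelated animals (A = I), 2 records each, one mean
X = np.ones((8, 1))
Z = np.kron(np.eye(4), np.ones((2, 1)))
rel = model_reliabilities(X, Z, np.eye(4))
print(bool(np.all((rel > 0) & (rel < 1))))  # True: reliabilities lie in (0, 1)
```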
Modelling of Attentional Dwell Time
DEFF Research Database (Denmark)
Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus
2009-01-01
into the temporal domain. In the neural interpretation of TVA (NTVA; Bundesen, Habekost and Kyllingsbæk, 2005), processing resources are implemented as allocation of cortical cells to objects in the visual field. A feedback mechanism is then used to keep encoded objects in VSTM alive. The proposed model of attentional dwell time extends these mechanisms by proposing that the processing resources (cells) already engaged in a feedback loop (i.e. allocated to an object) are locked in VSTM and therefore cannot be allocated to other objects in the visual field before the encoded object has been released...
Institute of Scientific and Technical Information of China (English)
李桂芬; 戈宝军; 李金香; 孙玉田; 焦晓霞; 梁彬
2014-01-01
In order to accurately study the impact of system disturbances on large machines and the power network, a mechanical shaft-field-circuit-network coupled time-step finite element model was set up. The comprehensive coupling of the multi-mass spring mechanical shaft system, the 2-D transient electromagnetic field, the external circuit and the power network was realized. In the model, the distortion of the electromagnetic field, magnetic saturation, eddy current skin effects and the parameters of the power transmission system were taken into account in the dynamic process, and the dynamic behavior of the mechanical shaft was considered in detail as well, including the mechanical torque transmitted by each mass, the damping torque and the speed variation of the different masses. To validate the proposed model, simulation and experimental results of faulty synchronization at 120°, performed on the dynamic simulation experimental system in the State Key Laboratory of Hydropower Equipment, were compared and analyzed. The results show that the coupled machine-field-circuit-network model is correct and reliable.
Directory of Open Access Journals (Sweden)
Kępisty Grzegorz
2015-09-01
Full Text Available In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
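The difference between a staircase step model (reaction rates frozen at the step start) and a slope/bridge-like predictor-corrector can be shown on a toy depletion equation with a state-dependent rate (the rate law, step counts and constants below are invented, not the MCB5 implementation):

```python
import math

def deplete(n0=1.0, t_end=1.0, n_steps=4, scheme="staircase"):
    """Toy depletion dN/dt = -lam(N) * N with state-dependent rate
    lam(N) = 1 + N (mimicking flux renormalization feedback).
    'staircase' freezes lam over each step; 'predictor_corrector'
    averages the start and predicted end-of-step values."""
    lam = lambda n: 1.0 + n
    n, dt = n0, t_end / n_steps
    for _ in range(n_steps):
        if scheme == "staircase":
            n *= math.exp(-lam(n) * dt)
        else:  # predictor-corrector
            n_pred = n * math.exp(-lam(n) * dt)
            n *= math.exp(-0.5 * (lam(n) + lam(n_pred)) * dt)
    return n

ref = deplete(n_steps=4096)                       # fine-step reference
err_stair = abs(deplete(scheme="staircase") - ref)
err_pc = abs(deplete(scheme="predictor_corrector") - ref)
print(err_pc < err_stair)  # True: the corrector step reduces the error
```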
Self-administered treatment in stepped-care models of depression treatment.
Scogin, Forrest R; Hanson, Ashley; Welsh, Douglas
2003-03-01
Stepped behavioral health care models have begun to receive increased attention. Self-administered treatments deserve consideration as an element in these models for some disorders and for some consumers. Features suggesting inclusion include low cost, wide availability, and evidence-based status. We present a stepped-care model for depression inclusive of a self-administered treatment component. We also discuss cautions such as depression severity and consumer preference. Evaluation of the efficacy and cost effectiveness of this approach to depression treatment is necessary. Copyright 2003 Wiley Periodicals, Inc. J Clin Psychol 59: 341-349, 2003.
Kelly, John F.; Brown, Sandra A.; Abrantes, Ana; Kahler, Christopher; Myers, Mark
2013-01-01
Background Despite widespread use of 12-step treatment approaches and referrals to Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) by youth providers, little is known about the significance of these organizations in youth addiction recovery. Furthermore, existing evidence is based mostly on short-term follow-up and is limited methodologically. Methods Adolescent inpatients (N = 160; M age = 16, 40% female) were followed at 6-months, and at 1, 2, 4, 6, and 8 years post-treatment. Time-lagged, generalized estimating equations (GEE) modeled treatment outcome in relation to AA/NA attendance controlling for static and time-varying covariates. Robust regression (LOWESS) explored dose-response thresholds of AA/NA attendance on outcome. Results AA/NA attendance was common and intensive early post-treatment, but declined sharply and steadily over the 8-year period. Patients with greater addiction severity and those who believed they could not use substances in moderation were more likely to attend. Despite declining attendance, the effects related to AA/NA remained significant and consistent. Greater early participation was associated with better long-term outcomes. Conclusions Even though many youth discontinue AA/NA over time, attendees appear to benefit, and more severely substance-involved youth attend most. Successful early post-treatment engagement of youth in abstinence-supportive social contexts, such as AA/NA, may have long-term implications for alcohol and drug involvement into young adulthood. PMID:18557829
Directory of Open Access Journals (Sweden)
Conor Lawless
Full Text Available Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.
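The core of the stochastic-step hypothesis can be reproduced in a few lines: if each cell keeps a constant low ROS level until, with some fixed per-passage probability, it takes a single irreversible step to a high-ROS senescent state, the population mean ROS rises with passage number even though no individual cell's ROS drifts. All parameter values below are invented for illustration:

```python
import random

def population_mean_ros(n_cells=10000, n_passages=20, p_step=0.1,
                        ros_low=1.0, ros_high=10.0, seed=7):
    """Stochastic-step sketch: cells switch irreversibly from ros_low to
    ros_high with probability p_step per passage; returns the population
    mean ROS after each passage."""
    random.seed(seed)
    senescent = [False] * n_cells
    means = []
    for _ in range(n_passages):
        for i in range(n_cells):
            if not senescent[i] and random.random() < p_step:
                senescent[i] = True
        frac = sum(senescent) / n_cells
        means.append(ros_low + (ros_high - ros_low) * frac)
    return means

m = population_mean_ros()
print(m[-1] > m[0])  # True: mean ROS rises without any gradual single-cell increase
```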
Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters
DEFF Research Database (Denmark)
Nystrup, Peter; Madsen, Henrik; Lindström, Erik
2016-01-01
estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.
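Why persistent regimes produce slowly decaying autocorrelation in squared returns can be checked by sampling from a two-state Gaussian HMM (the stay probability and regime volatilities below are illustrative, not the fitted values from the paper):

```python
import numpy as np

def sample_hmm(n=20000, p_stay=0.995, sigmas=(0.5, 2.0), seed=3):
    """Sample returns from a two-state Gaussian hidden Markov model with
    persistent regimes (same stay probability in both states)."""
    rng = np.random.default_rng(seed)
    state, x = 0, np.empty(n)
    for t in range(n):
        if rng.random() > p_stay:
            state = 1 - state                  # rare regime switch
        x[t] = sigmas[state] * rng.standard_normal()
    return x

def acf(z, lag):
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))

x2 = sample_hmm() ** 2
print(acf(x2, 1) > 0 and acf(x2, 100) > 0)  # True: ACF stays positive at long lags
```

Theoretically the regime contribution to the ACF decays as (2 * p_stay - 1)**lag, which for p_stay near 1 mimics long memory over the lags of practical interest.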
Huang, Chung-Yuan
A new formulation of the stream function based on a stream function coordinate (SFC) concept for inviscid flow field calculations is presented. In addition, a new method is developed not only to accelerate, but also to stabilize the iterative schemes for steady and unsteady, linear and non-linear, scalar and system of coupled, partial differential equations. With this theory, the limitation on the time step size of an explicit scheme for solving unsteady problems and the limitation on the relaxation factors of an iterative scheme for solving steady state problems could be analytically determined. Moreover, this theory allows the determination of the optimal time steps for explicit time-stepping schemes and the optimal values of the acceleration factors for iterative schemes, if the transient behavior is immaterial.
Energy Technology Data Exchange (ETDEWEB)
Kapil, V.; Ceriotti, M., E-mail: michele.ceriotti@epfl.ch [Laboratory of Computational Science and Modelling, Institute of Materials, Ecole Polytechnique Fédérale de Lausanne, Lausanne (Switzerland); VandeVondele, J., E-mail: joost.vandevondele@mat.ethz.ch [Department of Materials, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich (Switzerland)
2016-02-07
The development and implementation of increasingly accurate methods for electronic structure calculations mean that, for many atomistic simulation problems, treating light nuclei as classical particles is now one of the most serious approximations. Even though recent developments have significantly reduced the overhead for modeling the quantum nature of the nuclei, the cost is still prohibitive when combined with advanced electronic structure methods. Here we present how multiple time step integrators can be combined with ring-polymer contraction techniques (effectively, multiple time stepping in imaginary time) to reduce virtually to zero the overhead of modelling nuclear quantum effects, while describing inter-atomic forces at high levels of electronic structure theory. This is demonstrated for a combination of MP2 and semi-local DFT applied to the Zundel cation. The approach can be seamlessly combined with other methods to reduce the computational cost of path integral calculations, such as high-order factorizations of the Boltzmann operator or generalized Langevin equation thermostats.
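The multiple time step idea referred to above can be sketched with a generic r-RESPA integrator: the expensive "slow" force is applied as an impulse at the outer step edges while a cheap "fast" force is integrated with velocity Verlet at a finer step (the force split and parameters below are a hypothetical 1D stand-in, not the MP2/DFT splitting of the paper):

```python
def respa_step(x, v, dt_outer, n_inner, f_fast, f_slow, m=1.0):
    """One r-RESPA multiple-time-step update: half kick from the slow
    force, n_inner velocity-Verlet steps under the fast force, half kick
    from the slow force again. Generic sketch of the integrator class."""
    dt_in = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m          # slow-force half impulse
    for _ in range(n_inner):                     # fast-force Verlet loop
        v += 0.5 * dt_in * f_fast(x) / m
        x += dt_in * v
        v += 0.5 * dt_in * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m
    return x, v

# Stiff spring (fast, cheap) plus weak anharmonic perturbation (slow, "expensive")
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1 * x ** 3
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10, f_fast, f_slow)
energy = 0.5 * v * v + 50.0 * x * x + 0.025 * x ** 4
print(energy < 51.0)  # True: energy stays bounded, the MTS scheme is stable
```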
Development of real-time diagnostics and feedback algorithms for JET in view of the next step
Energy Technology Data Exchange (ETDEWEB)
Murari, A [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, I-35127, Padua (Italy); Joffrin, E [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Felton, R [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Mazon, D [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Zabeo, L [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Albanese, R [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC, Loc. Feo di Vito, I-89060, RC (Italy); Arena, P [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Ambrosino, G [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico II, Via Claudio 21, I-80125 Naples (Italy); Ariola, M [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico II, Via Claudio 21, I-80125 Napoli (Italy); Barana, O [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, I-35127, Padua (Italy); Bruno, M [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Laborde, L [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Moreau, D [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Piccolo, F [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Sartori, F [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Crisanti, F [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E de la [Associacion EURATOM CIEMAT para Fusion, Avenida Complutense 22, E-28040 Madrid (Spain); Sanchez, J [Associacion EURATOM CIEMAT para Fusion, Avenida Complutense 22, E-28040 Madrid (Spain)
2005-03-01
Real-time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of next step tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. Some of the signals now routinely provided in real time at JET are: (i) the internal inductance and the main confinement quantities obtained by calculating the Shafranov integrals from the pick-up coils with 2 ms time resolution; (ii) the electron temperature profile, from electron cyclotron emission every 10 ms; (iii) the ion temperature and plasma toroidal velocity profiles, from charge exchange recombination spectroscopy, provided every 50 ms; and (iv) the safety factor profile, derived from the inversion of the polarimetric line integrals every 2 ms. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. With these new tools, several real-time schemes were implemented, among which the most significant is the simultaneous control of the safety factor and the plasma pressure profiles using the additional heating systems (LH, NBI, ICRH) as actuators. The control strategy adopted in this case consists of a multi-variable model-based technique, which was implemented as a truncated singular value decomposition of an integral operator. This approach is considered essential for systems like tokamak machines, characterized by a strong mutual dependence of the various parameters and the distributed nature of the quantities, the plasma profiles, to be controlled. First encouraging results were also obtained using non
One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values
Directory of Open Access Journals (Sweden)
Jin Xiao; Bing Zhu; Geer Teng; Changzheng He; Dunhu Liu
2014-01-01
Full Text Available Scientific customer value segmentation (CVS) is the basis of efficient customer relationship management; customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually contain many missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple-classifier ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset "German" from UCI and the real customer churn prediction dataset "China churn" show that ODCEM outperforms four commonly used "two-step" models and the ensemble-based model LMF, and can provide better decision support for market managers.
Yang, Zhijun; Cameron, Katherine; Lewinger, William; Webb, Barbara; Murray, Alan
2012-03-01
Animals such as stick insects can adaptively walk on complex terrains by dynamically adjusting their stepping motion patterns. Inspired by the coupled Matsuoka and resonate-and-fire neuron models, we present a nonlinear oscillation model as the neuromorphic central pattern generator (CPG) for rhythmic stepping pattern generation. This dynamic model can also be used to actuate the motoneurons on a leg joint with adjustable driving frequencies and duty cycles, obtained by changing a few of the model parameters during operation, so that different stepping patterns can be generated. A novel mixed-signal integrated circuit design of this dynamic model is subsequently implemented which, although simplified, delivers equivalent output performance in terms of adjustable frequency and duty cycle. Three identical CPG models, used to drive three joints, can actuate an arthropod leg with three degrees of freedom. With appropriate initial circuit parameter settings, and thus suitable phase lags among the joints, the leg is expected to walk on complex terrain with adaptive steps. The adaptation is governed by the circuit parameters, mediated both by the higher-level nervous system and by lower-level sensory signals. The model is realized using a 0.3-µm complementary metal-oxide-semiconductor process, and the results are reported.
Maltsev, Roman V
2013-01-01
A new approach to the simulation of stationary flows by the Direct Simulation Monte Carlo (DSMC) method is proposed. The idea is to specify an individual time step for each component of a gas mixture. The approach consists of modifications mainly to the collision phase, together with recommendations on choosing the time step ratios. It softens the demands on computational resources in cases of disparate collision diameters of molecules and/or disparate molecular masses, which are the cases important in vacuum deposition technologies. A few tests of the new approach are presented. Finally, its use is demonstrated on a problem of silver nanocluster diffusion in argon carrier gas under the conditions of silver deposition experiments.
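The individual-time-step bookkeeping can be sketched as follows. This toy advances only the free-flight phase (the paper's modified collision phase is omitted), and the species names, velocities, and the 4:1 step ratio are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture: heavy carrier gas vs. light trace species.
# Each component gets its own time step; the faster species takes
# proportionally more substeps per global step (integer dt ratios).
species = {
    "argon":  {"dt": 1.0e-6, "x": np.zeros(100), "v": rng.normal(0, 400.0, 100)},
    "silver": {"dt": 0.25e-6, "x": np.zeros(100), "v": rng.normal(0, 100.0, 100)},
}
dt_global = 1.0e-6  # global step = largest species time step

def advance(sp, t_target):
    """Free-flight phase with an individual time step per species."""
    n_sub = round(t_target / sp["dt"])
    for _ in range(n_sub):
        sp["x"] += sp["v"] * sp["dt"]   # ballistic substep
    return n_sub

subs = {name: advance(sp, dt_global) for name, sp in species.items()}
```

All components arrive at the same synchronization time, so a collision phase could then be applied consistently across the mixture.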
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2014-11-01
Full Text Available Short-term prediction of dynamic turning movement proportions at intersections is very important for intelligent transportation systems, but turning flows cannot be detected directly by current traffic surveillance devices. Existing prediction models have proved rather accurate in general, but not precise enough during every time interval, and can only provide one-step predictions. This paper first presents a Bayesian combined model to forecast the entering and exiting flows at intersections by integrating a nonlinear regression model, a moving average model, and an autoregressive model. Based on the forecasted traffic flows, it further develops an accurate backpropagation neural network model and an efficient Kalman filtering model to predict the dynamic turning movement proportions. Using a Bayesian method with both historical information and current prediction results for error adjustment, the paper finally integrates the two prediction models into a Bi-Bayesian combined framework that achieves both one-step and two-step predictions. A case study is implemented with practical survey data, both historical and current, collected at an intersection in Beijing. The reported prediction results indicate that the Bi-Bayesian combined model is rather accurate and stable for on-line applications.
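A generic sketch of the combination step, assuming weights proportional to the inverse of each model's recent mean squared error (one practical reading of Bayesian forecast combination, not necessarily the paper's exact formulation). All numbers are hypothetical.

```python
import numpy as np

def combine_forecasts(preds, actuals_hist, preds_hist):
    """Weight each model by the inverse of its recent mean squared error,
    a simple stand-in for posterior model weights."""
    preds_hist = np.asarray(preds_hist, float)          # (n_obs, n_models)
    actuals = np.asarray(actuals_hist, float)[:, None]  # (n_obs, 1)
    errs = ((preds_hist - actuals) ** 2).mean(axis=0)   # MSE per model
    w = 1.0 / (errs + 1e-12)
    w /= w.sum()
    return float(np.dot(w, preds)), w

# Model A tracked the history closely; model B was biased high.
actual_hist = [10.0, 12.0, 11.0]
hist = [[10.1, 13.0], [11.9, 15.0], [11.0, 14.2]]
combined, w = combine_forecasts([12.0, 15.0], actual_hist, hist)
```

The combined forecast leans almost entirely on the historically accurate model, which is the qualitative behavior error-adjusted Bayesian combination is meant to deliver.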
Island-dynamics model for mound formation: effect of a step-edge barrier.
Papac, Joe; Margetis, Dionisios; Gibou, Frederic; Ratsch, Christian
2014-08-01
We formulate and implement a generalized island-dynamics model of epitaxial growth based on the level-set technique to include the effect of an additional energy barrier for the attachment and detachment of atoms at step edges. For this purpose, we invoke a mixed, Robin-type, boundary condition for the flux of adsorbed atoms (adatoms) at each step edge. In addition, we provide an analytic expression for the requisite equilibrium adatom concentration at the island boundary. The only inputs are atomistic kinetic rates. We present a numerical scheme for solving the adatom diffusion equation with such a mixed boundary condition. Our simulation results demonstrate that mounds form when the step-edge barrier is included, and that these mounds steepen as the step-edge barrier increases.
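The Robin-type condition can be checked in a minimal 1-D steady-state sketch, assuming a constant deposition flux F on a terrace of width L and the same attachment kinetics k at both step edges (a simplification of the island-dynamics model). The exact solution is a parabola, so the finite-difference result can be verified directly.

```python
import numpy as np

# Steady 1-D adatom diffusion on a terrace with a step-edge barrier:
#   D u'' + F = 0,   D u'(0) = k (u(0) - u_eq),   -D u'(L) = k (u(L) - u_eq)
D, F, L, k, u_eq, N = 1.0, 1.0, 1.0, 2.0, 0.1, 200
h = L / N
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i in range(1, N):                      # interior: second difference
    A[i, i - 1] = A[i, i + 1] = D / h**2
    A[i, i] = -2 * D / h**2
    b[i] = -F
A[0, 0], A[0, 1], b[0] = -D / h - k, D / h, -k * u_eq        # Robin at x = 0
A[N, N], A[N, N - 1], b[N] = -D / h - k, D / h, -k * u_eq    # Robin at x = L
u = np.linalg.solve(A, b)

x = np.linspace(0, L, N + 1)
u_exact = u_eq + F * L / (2 * k) + F * x * (L - x) / (2 * D)  # analytic parabola
```

As k grows the boundary concentration approaches u_eq (no barrier); as k shrinks the adatom density piles up on the terrace, which is the mechanism behind mound steepening.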
A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions
Directory of Open Access Journals (Sweden)
Abdul Rahman Hafiz
2011-01-01
Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's ability to interact with humans in real time is one of the main keys toward realizing such an autonomous robot. In this work, we propose a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human visual system. The experimental results verified the validity of the model. The robot could see clearly in real time and build a mental map that helped it to be aware of the users in front of it and to develop a positive interaction with them.
An Efficient Explicit-time Description Method for Timed Model Checking
Wang, Hao; 10.4204/EPTCS.14.6
2009-01-01
Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard untimed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method (using rendezvous synchronization steps) and the Semaphore-based Explicit-time Description Method (using only one global variable), were proposed; both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with p...
Richly parameterized linear models additive, time series, and spatial models using random effects
Hodges, James S
2013-01-01
A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The aut
Development of a three dimensional circulation model based on fractional step method
Abualtayef, Mazen; Kuroiwa, Masamitsu; Seif, Ahmed Khaled; Matsubara, Yuhei; Aly, Ahmed M.; Sayed, Ahmed A.; Sambe, Alioune Nar
2010-03-01
A three-dimensional multilayer hydrodynamic and thermodynamic numerical model was developed for domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. It was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane with the finite element method in the vertical plane. The numerical techniques were described and the model test and application were presented. For the model application to the northern part of Ariake Sea, the hydrodynamic.
Directory of Open Access Journals (Sweden)
Tandale Babasaheb V
2008-12-01
Full Text Available Abstract Background Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55% to 75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one-step reverse transcriptase PCR (real-time one-step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods Primers and a probe for the P gene were designed and used to standardize the real-time one-step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning, and run-off transcription. The optimized real-time one-step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), and in vitro (Vero E6, PS, RD, and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one-step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results Real-time one-step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10^2-10^10 (r^2 = 0.99) with a maximum coefficient of variation (CV) of 5.91% for IVT RNA. The newly developed real-time RT-PCR was on par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of both real-time one-step RT-PCR and nested RT-PCR was found to be 1.2 × 10^0 PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10^2 PFU/ml). Vero and PS cell lines (1.2 × 10^3 PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy
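The standard-curve quantitation behind such assays reduces to a linear fit of Ct against log10 copy number; the slope and noise values below are hypothetical, chosen to mimic the reported linearity (r^2 ≈ 0.99).

```python
import numpy as np

# Toy standard curve: Ct values measured for known dilutions of an
# in vitro transcribed standard; unknowns are read off the fitted line.
log10_copies = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
ct = 38.0 - 3.32 * log10_copies + np.random.default_rng(1).normal(0, 0.1, 9)

slope, intercept = np.polyfit(log10_copies, ct, 1)   # linear standard curve
r2 = np.corrcoef(log10_copies, ct)[0, 1] ** 2

def quantify(ct_sample):
    """Invert the standard curve: Ct -> estimated log10 copy number."""
    return (ct_sample - intercept) / slope

est = quantify(38.0 - 3.32 * 6.0)   # a sample with ~1e6 copies
```

A slope near -3.32 corresponds to ~100% amplification efficiency, which is why it is the usual benchmark for a well-behaved assay.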
A reduced-complexity model for sediment transport and step-pool morphology
Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo
2016-07-01
A new particle-based reduced-complexity model to simulate sediment transport and channel morphology in steep streams is presented. The model CAST (Cellular Automaton Sediment Transport) contains phenomenological parameterizations, deterministic or stochastic, of sediment supply, bed load transport, and particle entrainment and deposition in a cellular-automaton space with uniform grain size. The model reproduces a realistic bed morphology and typical fluctuations in transport rates observed in steep channels. Particle hop distances, from entrainment to deposition, are well fitted by exponential distributions, in agreement with field data. The effect of stochasticity in both the entrainment and the input rate is shown. A stochastic parameterization of the entrainment is essential to create and maintain a realistic channel morphology, while the intermittent transport of grains in CAST shreds the input signal and its stochastic variability. A jamming routine has been added to CAST to simulate the grain-grain and grain-bed interactions that lead to particle jamming and step formation in a step-pool stream. The results show that jamming is effective in generating steps in unsteady conditions. Steps are created during high-flow periods and they survive during low flows only in sediment-starved conditions, in agreement with the jammed-state hypothesis of Church and Zimmermann (2007). Reduced-complexity models like CAST give new insights into the dynamics of complex phenomena such as sediment transport and bedform stability and are a useful complement to fully physically based models to test research hypotheses.
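The exponential hop-distance distributions reported for CAST arise naturally if an entrained grain deposits with a fixed probability per cell traversed, as in this minimal sketch (the deposition probability is hypothetical).

```python
import numpy as np

rng = np.random.default_rng(42)

def hop_distance(p_dep, rng):
    """An entrained grain moves one cell per step and deposits with
    probability p_dep per cell, giving geometric (discretized
    exponential) hop distances with mean 1/p_dep."""
    d = 1
    while rng.random() > p_dep:
        d += 1
    return d

p_dep = 0.25
hops = np.array([hop_distance(p_dep, rng) for _ in range(20000)])
mean_hop = hops.mean()   # should approach 1 / p_dep = 4 cells
```

This is the cellular-automaton analogue of the exponential hop-length statistics seen in tracer-particle field data.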
De Raedt, H; Michielsen, K; Kole, JS; Figge, MT
2003-01-01
We present a one-step algorithm that solves the Maxwell equations for systems with spatially varying permittivity and permeability by the Chebyshev method. We demonstrate that this algorithm may be orders of magnitude more efficient than current finite-difference time-domain (FDTD) algorithms.
Verrière, Marc; Dubray, Noël; Schunck, Nicolas; Regnier, David; Dossantos-Uzarralde, Pierre
2017-09-01
In our fully microscopic approach, the dynamical description of low-energy fission is decomposed into two steps. In the first step we generate the potential energy surface (PES) of the compound system with the Hartree-Fock-Bogoliubov (HFB) method using a Gogny interaction. The second step uses the Time Dependent Generator Coordinate Method (TDGCM) with the Gaussian Overlap Approximation (GOA). The GOA rests on two assumptions: the overlap matrix between HFB states has a Gaussian shape (with respect to the difference between the coordinates of the states in deformation space), and the expectation value of the collective Hamiltonian between these states can be expanded up to second order, leading in this case to a Schrödinger-like equation. In this work we replace TDGCM+GOA in the second step of our approach by an exact treatment of the TDGCM. The main equation of this method is the time-dependent Hill-Wheeler equation, which involves two objects: the overlap matrix and the collective Hamiltonian. We first calculate these matrices on a PES. Then, we build an "exact TDGCM" solver using a finite element method and a Crank-Nicolson scheme. In this talk, we will present the time-dependent Hill-Wheeler equation and the discretization schemes (in time and in deformation space). The analytic calculation of the overlap matrix and the collective Hamiltonian will be detailed. Finally, first results with an exact treatment of the TDGCM will be discussed.
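The Crank-Nicolson scheme can be sketched on a generic Schrödinger-like equation i dpsi/dt = H psi with a Hermitian H. This simplified sketch omits the overlap matrix that appears in the Hill-Wheeler equation, but it shows the key property of the scheme: the update is a Cayley transform, hence unitary, so the norm is conserved.

```python
import numpy as np

# Crank-Nicolson step:  (I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old
n, dt = 64, 0.05
x = np.linspace(-5.0, 5.0, n)
# Toy Hermitian Hamiltonian: discrete Laplacian plus harmonic potential.
H = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (x[1] - x[0]) ** 2 + np.diag(0.5 * x**2)

psi = np.exp(-x**2)
psi /= np.linalg.norm(psi)
Id = np.eye(n)
A = Id + 0.5j * dt * H
B = Id - 0.5j * dt * H
for _ in range(200):
    psi = np.linalg.solve(A, B @ psi)   # one implicit time step
norm = np.linalg.norm(psi)              # conserved up to roundoff
```

In a real TDGCM solver the identity matrix would be replaced by the (non-trivial) overlap matrix, and the dense solve by a sparse finite-element one.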
Long memory of financial time series and hidden Markov models with time-varying parameters
DEFF Research Database (Denmark)
Nystrup, Peter; Madsen, Henrik; Lindström, Erik
facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.
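The basic effect can be sketched by simulating a two-state Gaussian hidden Markov model with persistent calm/volatile regimes and checking that squared returns are positively autocorrelated. The parameters are hypothetical, and unlike the paper's adaptive approach they are held fixed here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-state Gaussian HMM for returns: persistent calm/volatile regimes.
p_stay = 0.99                     # probability of staying in the current state
sigma = np.array([0.5, 2.0])      # regime volatilities
n = 50000
state = 0
states = np.empty(n, dtype=int)
for t in range(n):
    states[t] = state
    if rng.random() > p_stay:     # rare regime switch
        state = 1 - state
returns = rng.normal(0.0, sigma[states])

sq = returns**2
acf1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]   # volatility clustering
```

With fixed parameters this autocorrelation decays geometrically; letting the parameters drift, as in the paper, is what stretches the decay into apparent long memory.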
Integrated modeling of high poloidal beta scenario for a next-step reactor
McClenaghan, J.; Garofalo, A. M.; Meneghini, O.; Smith, S. P.
2015-11-01
In order to fill the scientific and technological gaps between ITER and a nuclear fusion power plant (DEMO), a next-step integrated nuclear test facility is critical. A high poloidal beta tokamak regime investigated in recent DIII-D experiments is a promising candidate for steady-state operation in such a next-step device, because the large bootstrap current fraction (~80%) reduces the demands on external current drive. Despite the large values of q95 ~ 10, the normalized fusion performance observed in the experiments meets the target for an economically attractive fusion power plant such as ARIES-ACT2. In this work, we project the performance of next-step steady-state reactors with conducting and superconducting coils using theory-based 0-D modeling and full 1.5-D transport modeling. Work supported by U.S. DOE under DE-FC02-04ER54698.
Analogue modelling of the effect of topographic steps in the development of strike-slip faults
Tomás, Ricardo; Duarte, João C.; Rosas, Filipe M.; Schellart, Wouter; Strak, Vincent
2016-04-01
Strike-slip faults often cut across regions of overthickened crust, such as oceanic plateaus or islands. These morphological steps likely cause a local variation in the stress field that controls the geometry of these systems. Such variation in the stress field will likely play a role in strain localization and the associated seismicity. This is of particular importance since wrench systems can produce very high magnitude earthquakes. However, such systems have generally been overlooked and are still poorly understood. In this work we will present a set of analogue models that were designed to understand how a step in the morphology affects the development of a strike-slip fault system. The models consist of a sand-cake with two areas of different thickness connected by a gentle ramp perpendicular to a dextral strike-slip basal fault. The sand-cake lies above two basal plates on which the dextral relative motion was imposed using a stepping motor. Our results show that a Riedel fault system develops across the two flat areas. However, a very asymmetric fault pattern develops across the morphological step. A deltoid constrictional bulge develops in the thinner part of the model, which progressively acquires a sigmoidal shape with increasing offset. In the thicker part of the domain, the deformation is mostly accommodated by Riedel faults, and the one closest to the step acquires a relatively lower angle. Associated with this Riedel fault, a collapse area develops and amplifies with increasing offset. For high topographic steps, the propagation of the main fault across the step area only occurs in the final stages of the experiments, contrary to what happens when the step is small or nonexistent. These results strongly suggest a major impact of topographic variation on the development of strike-slip fault systems. The step in the morphology causes variations in the potential energy that change the local stress field (mainly the vertical
Data sensitivity in a hybrid STEP/Coulomb model for aftershock forecasting
Steacy, S.; Jimenez Lloret, A.; Gerstenberger, M.
2014-12-01
Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models that add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests involving data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip, and we also examine how the choice of receiver plane geometry affects the results. We find that the results are strongly sensitive to the slip models and moderately sensitive to the choice of receiver orientation. We further find that comparison of the stress fields (resulting from the slip models) with the locations of events in the learning period provides advance information on whether or not a particular hybrid model will perform better than STEP.
Jothiprakash, V.; Magar, R. B.
2012-07-01
Summary: In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS), and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect, and combined models are developed with lumped and distributed input data. Further, model performance was evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows at both daily and hourly time-steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, owing to reduced noise in the data as well as to the chosen techniques and training approach, the appropriate selection of network architecture and inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed-data models is due to larger variations and a smaller number of observed values.
Energy Technology Data Exchange (ETDEWEB)
Meyer, Chad D.; Balsara, Dinshaw S. [Physics Department, Univ. of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Aslam, Tariq D. [WX-9 Group, Los Alamos National Laboratory, MS P952, Los Alamos, NM 87545 (United States)
2014-01-15
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed-type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems: a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very
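A minimal sketch of an RKL1 superstep as described: s explicit stages built on the Legendre recursion, stable up to roughly (s² + s)/2 explicit time steps. It is applied here to 1-D heat conduction with a time step 20 times beyond the explicit limit; the grid size and s are arbitrary choices, and the stage coefficients follow the published first-order scheme.

```python
import numpy as np

def rkl1_step(u, rhs, dt, s):
    """One s-stage Runge-Kutta-Legendre (RKL1) superstep,
    stable for dt up to ~ (s**2 + s)/2 times the explicit limit."""
    w = 2.0 / (s**2 + s)
    y_prev = u.copy()
    y = u + w * dt * rhs(u)                     # stage 1
    for j in range(2, s + 1):                   # Legendre-type recursion
        mu, nu = (2 * j - 1) / j, (1 - j) / j
        y, y_prev = mu * y + nu * y_prev + mu * w * dt * rhs(y), y
    return y

# 1-D heat equation u_t = u_xx, Dirichlet ends, single sine mode.
N = 64
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]

def rhs(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    return out

u = np.sin(np.pi * x)
dt_expl = dx**2 / 2.0          # explicit (forward Euler) stability limit
dt, s = 20 * dt_expl, 7        # s = 7 permits supersteps up to 28x
t = 0.0
while t < 0.1 - 1e-12:
    step = min(dt, 0.1 - t)
    u = rkl1_step(u, rhs, step, s)
    t += step
err = np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)))
```

With s = 1 the recursion collapses to forward Euler, which is a quick sanity check on the coefficients.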
An, Taechang; Kim, Ki Su; Hahn, Sei Kwang; Lim, Geunbae
2010-08-21
Aptamer-functionalized, addressable SWNT-film arrays between cantilever electrodes were successfully developed for biosensor applications. Dielectrophoretically aligned suspended SWNT films enabled highly specific and rapid detection of target proteins with a large binding surface area. A thrombin-aptamer-immobilized SWNT-film FET biosensor achieved real-time, label-free, electrical detection of thrombin molecules down to a concentration of ca. 7 pM, with a step-wise rapid response time of several seconds.
Martino, K G; Marks, B P
2007-12-01
Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most generally used method is two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration, 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure for 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
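The Gompertz primary model, and the flavor of a one-step "global" fit from experimental conditions to counts, can be sketched with synthetic data. Here only the rate parameter B is searched by brute force, with the other parameters held at their (assumed known) values; all numbers are hypothetical.

```python
import numpy as np

def gompertz(t, y0, C, B, M):
    """Primary model: log10 microbial counts following a Gompertz curve
    (y0 = initial level, C = total increase, B = rate, M = inflection time)."""
    return y0 + C * np.exp(-np.exp(-B * (t - M)))

# Synthetic growth data under one hypothetical condition.
t = np.linspace(0, 48, 25)                      # hours
rng = np.random.default_rng(3)
obs = gompertz(t, 2.0, 6.0, 0.15, 20.0) + rng.normal(0, 0.05, t.size)

# Crude one-step ("global") fit: least squares directly on counts,
# searching only the growth rate B over a grid.
best_B, best_sse = None, np.inf
for B in np.linspace(0.05, 0.30, 251):
    sse = np.sum((obs - gompertz(t, 2.0, 6.0, B, 20.0)) ** 2)
    if sse < best_sse:
        best_B, best_sse = B, sse
```

In a full global regression the parameters themselves would be response-surface functions of temperature, pH, etc., and all would be estimated in the same least-squares problem, which is what avoids the error propagation of the two-step procedure.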
A dissociative fluorescence enhancement technique for one-step time-resolved immunoassays
Energy Technology Data Exchange (ETDEWEB)
Blomberg, Kaj R.; Mukkala, Veli-Matti; Hakala, Harri H.O.; Maekinen, Pauliina H.; Suonpaeae, Mikko U.; Hemmilae, Ilkka A. [PerkinElmer Inc, Wallac Oy, P.O. Box 10, Turku (Finland)
2011-02-15
The limitation of current dissociative fluorescence enhancement techniques is that the lanthanide chelate structures used as molecular probes are not stable enough in one-step assays with high concentrations of complexones or metal ions in the reaction mixture, since these substances interfere with the lanthanide chelate conjugated to the detector molecule. Lanthanide chelates of diethylenetriaminepentaacetic acid (DTPA) are extremely stable, and we used EuDTPA derivatives conjugated to antibodies as tracers in one-step immunoassays containing high concentrations of complexones or metal ions. Enhancement solutions based on different β-diketones were developed and tested for their fluorescence-enhancing capability in immunoassays with EuDTPA-labelled antibodies. Characteristics tested were fluorescence intensity, analytical sensitivity, kinetics of complex formation and signal stability. Formation of fluorescent complexes is fast (5 min) in the presented enhancement solution, with EuDTPA probes withstanding strong complexones (ethylenediaminetetraacetate (EDTA) up to 100 mM) or metal ions (up to 200 µM) in the reaction mixture; the signal is intensive and stable for 4 h, and the analytical sensitivity is 40 fmol/L with Eu, 130 fmol/L with Tb, 2.1 pmol/L with Sm and 8.5 pmol/L with Dy. With the improved fluorescence enhancement technique, EDTA and citrate plasma samples as well as samples containing relatively high concentrations of metal ions can be analysed using a one-step immunoassay format, also at elevated temperatures. It facilitates four-plexing, is based on one chelate structure for detector molecule labelling and is suitable for immunoassays due to the wide dynamic range and the analytical sensitivity. (orig.)
Directory of Open Access Journals (Sweden)
Su KG
2017-02-01
Kimmy G Su, Julie H Carter, Keiran K Tuck, Tony Borcich, Linda A Bryans, Lisa L Mann, Jennifer L Wilhelm, Erik K Fromme (Oregon Health & Science University; The Oregon Clinic-Neurology; Parkinson's Resources of Oregon; Northwest Clinic for Voice and Swallowing; OHSU Knight Cancer Institute, Portland, OR, USA) Abstract: Late-stage Parkinson's and Parkinson-plus patients have increased needs beyond motor symptom management that cannot be fully addressed in a typical neurology clinic visit. Complicating matters are the concurrent increasing emotional and physical demands on caregivers, which, if addressed, further stretch clinic time constraints. These complex and extensive patient and caregiver needs warrant a dedicated clinic to provide the necessary interdisciplinary care. In contrast to a typical model, in which the neurology clinician refers the patient to various ancillary treatment groups, resulting in multiple separate clinic visits, the interdisciplinary model supports direct communication between the different disciplines during the clinic visit, allowing for a more coordinated response that takes multiple perspectives into account. Such an interdisciplinary model has been used in neurologic disorders with complex end-stage disease needs, such as amyotrophic lateral sclerosis, with notable improvement in quality of life and survival. The Oregon Health & Science University Parkinson Center and Movement Disorders Clinic has developed an interdisciplinary clinic called Next Step, composed of neurology clinicians, a physical therapist, a speech pathologist, a social worker, and a nursing coordinator. The clinic focuses on palliative care issues, including complex late-stage motor symptoms, nonmotor symptoms, and the quality-of-life goals of both the patient and caregiver(s). This article…
Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.
2014-01-01
The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…
Student Responses to a Context- and Inquiry-Based Three-Step Teaching Model
Walan, Susanne; Rundgren, Shu-Nu Chang
2015-01-01
Research has indicated that both context- and inquiry-based approaches could increase student interest in learning sciences. This case study aims to present a context- and inquiry-based combined teaching approach, using a three-step teaching model developed by the PROFILES project, and investigates Swedish students' responses to the activity. A…
Linear system identification via backward-time observer models
Juang, Jer-Nan; Phan, Minh
1993-01-01
This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
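The realization step at the heart of this procedure (step two) can be sketched for the forward-time case as follows; this is an illustrative SISO implementation of the Eigensystem Realization Algorithm on synthetic pulse-response data, not the authors' code:

```python
import numpy as np

def era(markov, n, p=10, q=10):
    """Realize (A, B, C) from pulse-response (Markov) parameters via
    the Eigensystem Realization Algorithm; markov[k] is the sample
    h(k+1). Sketch for the SISO case."""
    # Block-Hankel matrices built from the Markov parameters
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    # Truncate to model order n
    Un, Vn = U[:, :n], Vt[:n, :].T
    Sroot = np.sqrt(np.diag(s[:n]))
    Sinv = np.linalg.inv(Sroot)
    A = Sinv @ Un.T @ H1 @ Vn @ Sinv
    B = (Sroot @ Vn.T)[:, :1]
    C = (Un @ Sroot)[:1, :]
    return A, B, C

# Pulse response of x_{k+1} = 0.5 x_k + u_k, y_k = x_k (D = 0):
# h(k) = 0.5**(k-1) for k >= 1
markov = [0.5**k for k in range(30)]
A, B, C = era(markov, n=1)
print(A[0, 0])  # recovered pole, approximately 0.5
```

For multi-input/multi-output data, the scalar samples become block matrices, but the Hankel/SVD structure is unchanged.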
Phase-step calibration technique based on a two-run-times-two-frame phase-shift method.
Zhong, Xianghong
2006-12-10
A novel phase-step calibration technique is presented, based on a two-run-times-two-frame phase-shift method. First, the symmetry factor M is defined to describe the distribution of the distorted phase caused by phase-shifter miscalibration; a phase-step calibration technique is then developed in which two sets of two interferograms with a straight fringe pattern are recorded and the phase step is obtained by calculating M of the wrapped phase map. This technique requires a good mirror, but no uniform illumination is needed and no complex mathematical operations are involved. It can be carried out in situ and is applicable to any phase shifter, whether linear or nonlinear.
Patrone, Paul; Einstein, T. L.; Margetis, Dionisios
2011-03-01
We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.
2015-06-01
Higher-Order Treatments of Boundary Conditions in Split-Step Fourier Parabolic Equation Models, by Savas Erdim (Master's thesis, June 2015). Parabolic equation models solved using the split-step Fourier (SSF) algorithm, such as the Monterey Miami Parabolic Equation model, are commonly used…
Dessens, Olivier
2016-04-01
Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important, as it can have great influence on future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) formulated a method of reconstructing the climate response of general circulation models (GCMs) to emission trajectories through an idealized experiment; this method is called the "step-response approach" and is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich, partial-equilibrium, bottom-up model developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well adapted to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification on the results of the model in terms of climate and energy system responses. The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of
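The convolution at the core of the step-response approach can be sketched as follows; the exponential response curve and ramp forcing below are illustrative stand-ins for GCM step-experiment output, not Met Office data:

```python
import numpy as np

def step_response_temperature(forcing, step_response, f_step):
    """Reconstruct temperature by superposing scaled copies of the
    GCM response R to an abrupt forcing step of size f_step:
        T(t) ~= sum_s [F(s) - F(s-1)] / f_step * R(t - s).
    Purely illustrative, not Met Office code."""
    n = len(forcing)
    dF = np.diff(forcing, prepend=0.0)      # forcing increments
    T = np.zeros(n)
    for s in range(n):
        T[s:] += dF[s] / f_step * step_response[: n - s]
    return T

# Idealized step response: exponential relaxation toward 3 K
years = np.arange(100)
R = 3.0 * (1.0 - np.exp(-years / 20.0))     # K, response to a unit step
F = np.linspace(0.0, 1.0, 100)              # slowly ramping forcing
T = step_response_temperature(F, R, f_step=1.0)
# T rises smoothly but lags behind the equilibrium response
```

The traceability advantage mentioned above is visible here: every term of `T` comes directly from the GCM's own step-experiment curve `R`.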
Physical modelling and scale effects of air-water flows on stepped spillways
Institute of Scientific and Technical Information of China (English)
CHANSON Hubert; GONZALEZ Carlos A.
2005-01-01
During the last three decades, the introduction of new construction materials (e.g. RCC (Roller Compacted Concrete) and strengthened gabions) has increased the interest in stepped channels and spillways. However, stepped-chute hydraulics is not simple, because of the different flow regimes and, importantly, because of very strong interactions between entrained air and turbulence. In this study, new air-water flow measurements were conducted in two large-size stepped chute facilities, with two step heights in each facility, to study the experimental distortion caused by scale effects and the soundness of extrapolating results to prototypes. Experimental data included distributions of air concentration, air-water flow velocity, bubble frequency, bubble chord length and air-water flow turbulence intensity. For a Froude similitude, the results implied that scale effects were present in both facilities, although the geometric scaling ratio was only Lr=2 in each case. The selection of the criterion for scale effects is a critical issue: for example, major differences (i.e. scale effects) were observed in terms of bubble chord sizes and turbulence levels, although little scale effect was seen in the void fraction and velocity distributions. Overall, the findings emphasize that physical modelling of stepped chutes based upon a Froude similitude is more sensitive to scale effects than classical smooth-invert chute studies, consistent with the basic dimensional analysis developed herein.
Detailed models for timing and efficiency in resistive plate chambers
Riegler, Werner
2003-01-01
We discuss detailed models for detector physics processes in Resistive Plate Chambers, in particular including the effect of attachment on the avalanche statistics. In addition, we present analytic formulas for average charges and intrinsic RPC time resolution. Using a Monte Carlo simulation including all the steps from primary ionization to the front-end electronics we discuss the dependence of efficiency and time resolution on parameters like primary ionization, avalanche statistics and threshold.
Multi-step polynomial regression method to model and forecast malaria incidence.
Directory of Open Access Journals (Sweden)
Chandrajit Chatterjee
Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors, such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, the development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a smaller time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions, validity and construction of confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city…
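The autoregressive, multi-step use of a lagged predictor described above can be sketched on synthetic data; the series, polynomial degree and forecast horizon below are illustrative and not the paper's actual data or model:

```python
import numpy as np

# Sketch: fit a polynomial regression of this period's incidence on
# the one-lag value, then forecast autoregressively by feeding each
# prediction back in as the next lag. Synthetic SPR-like series.
rng = np.random.default_rng(0)
spr = [5.0]
for _ in range(59):
    spr.append(1.0 + 0.8 * spr[-1] + rng.normal(0, 0.2))
spr = np.array(spr)

x, y = spr[:-1], spr[1:]               # one-lag predictor / response
coef = np.polyfit(x, y, deg=2)         # quadratic in the lagged value

forecasts, last = [], spr[-1]
for _ in range(6):                     # six-step-ahead forecast
    last = np.polyval(coef, last)      # autoregressive feedback
    forecasts.append(last)
```

The feedback loop is what makes the forecast "autoregressive" in the abstract's sense: prediction errors compound with the horizon, which is why the one-lag term carries most of the predictive weight.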
Directory of Open Access Journals (Sweden)
Benoit Mallet
2013-01-01
We present a local spatial approximation, or p-strategy, Discontinuous Galerkin method to solve the time-domain Maxwell equations. First, the Discontinuous Galerkin method with a local time-stepping strategy is recalled. Next, in order to increase the efficiency of the method, a local spatial approximation strategy is introduced and studied. While preserving accuracy, by using different spatial approximation orders for each cell this strategy is very efficient in reducing the computational time and the required memory in numerical simulations using very distorted meshes. Several numerical examples are given to show the interest and capability of such a method.
Initial Steps Toward a Hydrologic "Watershed" Model for the Ablation Zone of the Greenland Ice Sheet
Cooper, M. G.; Smith, L. C.; Rennermalm, A. K.; Pitcher, L. H.; Overstreet, B. T.; Chu, V. W.; Ryan, J.; Yang, K.
2015-12-01
Surface meltwater production on the Greenland Ice Sheet (GrIS) is a well-documented phenomenon but we lack understanding of the physical mechanisms that control the production, transport, and fate of the meltwater. To address this, we present initial steps toward the development of a novel hydrologic model for supraglacial streamflow on the GrIS. Ice ablation and surface meteorology were measured during a 6-day field campaign in a 112 km2 ablation zone of southwest Greenland. We modeled ablation using SnowModel, an energy balance snow- and ice-ablation model. The required model inputs included standard surface meteorology and a digital elevation model (DEM), and the model outputs include all components of the energy balance and surface meltwater production for each grid cell in the ice-sheet watershed. Our next steps toward developing a complete hydrologic model for supraglacial streamflow in the ablation zone of the GrIS include the application of the meltwater-routing model HydroFlow to compare with in-situ measurements of supraglacial river discharge.
Uchiyama, Takanori; Uchida, Ryusei
The purpose of this study is to develop a new modeling technique for the quantitative evaluation of spasticity in the upper limbs of hemiplegic patients. Each subject lay on a bed, and his forearm was supported with a jig to measure the elbow joint angle. The subject was instructed to relax and not to resist the step-like load applied to extend the elbow joint. The elbow joint angle and the electromyograms (EMG) of the biceps, triceps and brachioradialis muscles were measured. First, the step-like response was approximated with a proposed mathematical model based on musculoskeletal and physiological characteristics by the least-squares method. The proposed model involves an elastic component that depends on both muscle activity and elbow joint angle, and it approximated the responses well. Next, the torque generated by the elastic component was estimated, and the normalized elastic torque was approximated with a damped sinusoid by the least-squares method. The reciprocal of the time constant and the natural frequency of the normalized elastic torque were calculated, and they varied depending on the grades of the modified Ashworth scale of the subjects. It is suggested that the proposed modeling technique provides a good quantitative index of spasticity, as shown by the relationship between the reciprocal of the time constant and the natural frequency.
Shaik, O. S.; Kammerer, J.; Gorecki, J.; Lebiedz, D.
2005-12-01
Accurate experimental data increasingly allow the development of detailed elementary-step mechanisms for complex chemical and biochemical reaction systems. Model reduction techniques are widely applied to obtain representations in lower-dimensional phase space which are more suitable for mathematical analysis, efficient numerical simulation, and model-based control tasks. Here, we exploit a recently implemented numerical algorithm for error-controlled computation of the minimum dimension required for a still accurate reduced mechanism based on automatic time scale decomposition and relaxation of fast modes. We determine species contributions to the active (slow) dynamical modes of the reaction system and exploit this information in combination with quasi-steady-state and partial-equilibrium approximations for explicit model reduction of a novel detailed chemical mechanism for the Ru-catalyzed light-sensitive Belousov-Zhabotinsky reaction. The existence of a minimum dimension of seven is demonstrated to be mandatory for the reduced model to show good quantitative consistency with the full model in numerical simulations. We derive such a maximally reduced seven-variable model from the detailed elementary-step mechanism and demonstrate that it reproduces quantitatively accurately the dynamical features of the full model within a given accuracy tolerance.
Energy Technology Data Exchange (ETDEWEB)
Gavrea, B. I.; Anitescu, M.; Potra, F. A.; Mathematics and Computer Science; Univ. of Pennsylvania; Univ. of Maryland
2008-01-01
In this work we present a framework for the convergence analysis in a measure differential inclusion sense of a class of time-stepping schemes for multibody dynamics with contacts, joints, and friction. This class of methods solves one linear complementarity problem per step and contains the semi-implicit Euler method, as well as trapezoidal-like methods for which second-order convergence was recently proved under certain conditions. By using the concept of a reduced friction cone, the analysis includes, for the first time, a convergence result for the case that includes joints. An unexpected intermediary result is that we are able to define a discrete velocity function of bounded variation, although the natural discrete velocity function produced by our algorithm may have unbounded variation.
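The one-complementarity-problem-per-step structure can be illustrated in the simplest possible setting: a single point mass with one frictionless contact, where the per-step complementarity condition has a closed-form solution. This is a pedagogical sketch, not the paper's scheme:

```python
# Semi-implicit Euler time-stepping with one unilateral contact:
# a unit point mass falling onto the ground at q = 0. Each step solves
# the (here scalar, closed-form) complementarity condition
#     0 <= lam  perp  v_new >= 0     (when the contact is active),
# the simplest instance of the one-LCP-per-step structure.
def step(q, v, h, g=9.81):
    v_free = v - h * g                 # velocity without contact impulse
    if q + h * v_free <= 0.0:          # contact active at end of step
        lam = max(0.0, -v_free)        # impulse making v_new >= 0
        v_new = v_free + lam           # note lam * v_new = 0
    else:
        lam, v_new = 0.0, v_free
    q_new = max(0.0, q + h * v_new)    # semi-implicit position update
    return q_new, v_new

q, v = 1.0, 0.0                        # drop from 1 m at rest
for _ in range(2000):
    q, v = step(q, v, h=1e-3)
# After falling, the mass comes to rest on the ground: q ~ 0, v ~ 0
```

With joints and friction, `lam` becomes a vector constrained to a (reduced) friction cone and the scalar branch above becomes a genuine linear complementarity problem, which is where the paper's convergence analysis applies.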
Commercon, Benoit; Teyssier, Romain
2014-01-01
Implicit solvers present strong limitations when used on supercomputing facilities, in particular for adaptive mesh-refinement codes. We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids and implement it in the radiation hydrodynamics solver we designed for the RAMSES code for astrophysical purposes, more particularly for protostellar collapse. We briefly recall the radiation hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation hydrodynamics tests, after which we present an application to protostellar collapse. We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it ca...
Time-stepped & discrete-event simulations of electromagnetic propulsion systems Project
National Aeronautics and Space Administration — We propose to develop a new generation of electromagnetic simulation codes with mixed resolution modeling capabilities. The need for such codes arises in many fields...
Time-stepped & discrete-event simulations of electromagnetic propulsion systems Project
National Aeronautics and Space Administration — The existing plasma codes are ill suited for modeling of mixed resolution problems, such as the plasma sail, where the system under study comprises subsystems with...
Machine learning strategies for multi-step-ahead time series forecasting
Ben Taieb, Souhaib
2014-01-01
How much electricity is going to be consumed for the next 24 hours? What will be the temperature for the next three days? What will be the number of sales of a certain product for the next few months? Answering these questions often requires forecasting several future observations from a given sequence of historical observations, called a time series. Historically, time series forecasting has been mainly studied in econometrics and statistics. In the last two decades, machine learning, a fiel...
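The two classical strategies for multi-step-ahead forecasting, recursive (iterate a one-step model) and direct (fit a separate model per horizon), can be sketched on a toy autoregressive series; the data and plain least-squares models below are illustrative:

```python
import numpy as np

# Toy AR(1) series standing in for an electricity-load time series
rng = np.random.default_rng(1)
y = [0.0]
for _ in range(499):
    y.append(0.9 * y[-1] + rng.normal())
y = np.array(y)
H = 3                                   # forecast horizon

# Recursive strategy: fit one one-step model, iterate it H times.
a = np.linalg.lstsq(y[:-1, None], y[1:], rcond=None)[0][0]
rec = y[-1]
for _ in range(H):
    rec = a * rec                       # feed predictions back in

# Direct strategy: fit a separate model for each horizon h.
direct = []
for h in range(1, H + 1):
    a_h = np.linalg.lstsq(y[:-h, None], y[h:], rcond=None)[0][0]
    direct.append(a_h * y[-1])
```

The trade-off studied in this literature is visible even here: the recursive strategy propagates one-step errors through the feedback loop, while the direct strategy avoids that at the cost of fitting (and tuning) one model per horizon.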
Energy Technology Data Exchange (ETDEWEB)
Melintescu, A.; Galeriu, D. [' Horia Hulubei' National Institute for Physics and Nuclear Engineering, Bucharest-Magurele (Romania); Diabate, S.; Strack, S. [Institute of Toxicology and Genetics, Karlsruhe Institute of Technology - KIT, Eggenstein-Leopoldshafen (Germany)
2015-03-15
The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms, and a fully mechanistic model is difficult to develop owing to the complexity of the processes involved in tritium transfer and of the environmental conditions. First, existing models (ORYZA2000, CROPTRIT and WOFOST) are reviewed, presenting their features and limits. Second, the preparatory steps toward a robust model are discussed, considering the role of dry matter and the contribution of photosynthesis to the dynamics of OBT (Organically Bound Tritium) in crops.
Modeling and Real-Time Simulation of UPFC
Kimura, Misao; Takahashi, Choei; Kishibe, Hideto; Miyazaki, Yasuyuki; Noro, Yasuhiro; Iio, Naotaka
We have developed a digital real-time simulator of power electronics controllers, so-called FACTS (Flexible AC Transmission Systems) controllers and/or Custom Power, using MATLAB™/Simulink™ and a dSPACE™ system. This paper describes the modeling and calculation accuracy of a UPFC (Unified Power Flow Controller) model. Because the developed simulator operates with a large time step, a correction for the switching delay is implemented in the UPFC model to improve simulation accuracy. The calculation accuracy of the real-time UPFC model is at the same level as EMTDC™ results. We confirm stable operation of the developed UPFC model when connected to a commercial real-time digital simulator (RTDS™).
Directory of Open Access Journals (Sweden)
Fitzpatrick, Josée
2016-01-01
Dyadic data analysis with distinguishable dyads assesses the variance not only between dyads but also within the dyad, when members are distinguishable on a known variable. In past research, the Actor-Partner Interdependence Model (APIM) has been the statistical model of choice to take this interdependence into account. Although the method has received considerable interest in the past decade, to our knowledge no specific guide or tutorial exists describing how to test an APIM model. To close this gap, this article provides researchers with a step-by-step tutorial for assessing the most recent advancements of the APIM with the use of structural equation modeling (SEM). The present tutorial also utilizes the statistical program Mplus.
Nicolaides, Antonis; Soulimane, Tewfik; Varotsis, Constantinos
2016-01-01
Time-resolved step-scan FTIR spectroscopy has been employed to probe the dynamics of the ba3 oxidoreductase from Thermus thermophilus in the ns-μs time range and in the pH/pD 6–9 range. The data revealed a pH/pD sensitivity of the D372 residue and of the ring-A propionate of heme a3. Based on the observed transient changes, a model is described in which the protonic connectivity of w941-w946-927 to D372 and the ring-A propionate of heme a3 is established. PMID:27690021
Mulder, W. A.; Zhebel, E.; Minisini, S.
2014-02-01
We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and conditionally stable, leads to a fully explicit scheme. We provide estimates of its stability limit for simple cases, namely, the reference element with Neumann boundary conditions, its distorted version of arbitrary shape, the unit cube that can be partitioned into six tetrahedra with periodic boundary conditions and its distortions. The Courant-Friedrichs-Lewy stability limit contains an element diameter for which we considered different options. The one based on the sum of the eigenvalues of the spatial operator for the first-degree mass-lumped element gives the best results. It resembles the diameter of the inscribed sphere but is slightly easier to compute. The stability estimates show that the mass-lumped continuous and the discontinuous Galerkin finite elements of degree 2 have comparable stability conditions, whereas the mass-lumped elements of degree one and three allow for larger time steps.
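The kind of stability estimate discussed above can be illustrated with a simple stand-in operator; here a 1-D finite-difference Laplacian replaces the paper's finite element matrices, and the leap-frog limit dt <= 2/sqrt(lambda_max) is evaluated numerically:

```python
import numpy as np

# Leap-frog applied to u_tt = -L u is stable for
#     dt <= 2 / sqrt(lambda_max(L)).
# Here L is a 1-D finite-difference Laplacian with unit wave speed,
# an illustrative stand-in for the paper's finite element operators.
m = 200
h = 1.0 / m                              # element (grid) size
main = 2.0 / h**2 * np.ones(m)
off = -1.0 / h**2 * np.ones(m - 1)
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam_max = np.linalg.eigvalsh(L)[-1]      # largest eigenvalue of L
dt_stable = 2.0 / np.sqrt(lam_max)
# For this operator lam_max < 4 / h**2, so dt_stable is slightly
# above h: the classical CFL limit dt <= h/c up to a constant.
print(dt_stable, h)
```

The paper's refinement is precisely in how the "element diameter" entering such a bound is defined for distorted tetrahedra; the eigenvalue-sum estimate it recommends avoids computing `lam_max` globally.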
Mathematical Models of Waiting Time.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Considered are several mathematical models that can be used to study different waiting situations. Problems involving waiting at a red light, bank, restaurant, and supermarket are discussed. A computer program which may be used with these problems is provided. (CW)
Ruin Probability in Linear Time Series Model
Institute of Scientific and Technical Information of China (English)
ZHANG Lihong
2005-01-01
This paper analyzes a continuous-time risk model in which a linear time series model describes the claim process. Time is discretized stochastically using the instants at which claims occur; Doob's stopping time theorem and martingale inequalities are then applied to obtain expressions for the ruin probability, as well as exponential and non-exponential upper bounds on the ruin probability over an infinite time horizon. Numerical results are included to illustrate the accuracy of the non-exponential bound.
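For the classical compound-Poisson special case (not the paper's linear time series claim process), the ruin probability and its exponential (Lundberg) upper bound can be sketched by Monte Carlo:

```python
import numpy as np

# Cramer-Lundberg sketch: Poisson claim arrivals, exponential claim
# sizes, premium rate c > lam * mu. The exponential upper bound is
#     psi(u) <= exp(-R u),  R = 1/mu - lam/c  (adjustment coefficient).
# All parameter values below are illustrative.
rng = np.random.default_rng(2)
u0, c, lam, mu, T = 10.0, 1.5, 1.0, 1.0, 100.0

def ruined(u):
    t, x = 0.0, u
    while t < T:                            # finite-horizon approximation
        w = rng.exponential(1.0 / lam)      # inter-claim time
        t += w
        x += c * w - rng.exponential(mu)    # premiums in, one claim out
        if x < 0:                           # ruin can only occur at claims
            return True
    return False

psi_hat = np.mean([ruined(u0) for _ in range(2000)])
R = 1.0 / mu - lam / c                      # adjustment coefficient
print(psi_hat, np.exp(-R * u0))             # estimate vs Lundberg bound
```

The paper's contribution is the analogue of this bound (and sharper non-exponential ones) when the i.i.d. claim assumption is replaced by a linear time series.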
Tank tests of two models of flying-boat hulls to determine the effect of ventilating the step
Dawson, John R
1937-01-01
The results of tests made in the N.A.C.A. tank on two models of flying-boat hulls to determine the effect of ventilating the step are given graphically. The step of N.A.C.A. model 11-C was ventilated in several different ways and it was found that the resistance of the normal form is not appreciably affected by artificial ventilation in any of the forms tried. Further tests made with the depth of the step of model 11-C reduced likewise show no appreciable effect on the resistance from ventilation of the step. Tests were made on a model of the hull of the Navy P3M-1 flying-boat hull both with and without ventilation of the step. It was found that the discontinuity which is obtained in the resistance curves of this model is eliminated by ventilating the step.
Development of a three dimensional circulation model based on fractional step method
Directory of Open Access Journals (Sweden)
Mazen Abualtayef
2010-03-01
A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography, designed for examining the interactions between flow and topography. The model is based on the three-dimensional Navier-Stokes equations and is solved using the fractional step method, which combines the finite difference method in the horizontal plane with the finite element method in the vertical plane. The numerical techniques are described, and a model test and application are presented. In the application to the northern part of the Ariake Sea, hydrodynamic and thermodynamic results were predicted; the numerically predicted amplitudes and phase angles were well consistent with field observations.
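The operator-splitting idea behind the fractional step method can be sketched in one dimension; the advection-diffusion equation, grid, and coefficients below are illustrative, not the paper's 3-D FDM/FEM hybrid:

```python
import numpy as np

# 1-D advection-diffusion u_t + c u_x = nu u_xx on a periodic grid.
# Each time step is split into two substeps (fractional steps):
# advection first, then diffusion, instead of one coupled update.
nx, dx, dt = 100, 0.01, 1e-4
c, nu = 1.0, 0.01
x = np.arange(nx) * dx
u = np.exp(-((x - 0.3) ** 2) / 0.005)   # initial Gaussian pulse
mass0 = u.sum()                          # conserved by both substeps

for _ in range(500):
    # Substep 1: advection (first-order upwind, c > 0)
    u = u - c * dt / dx * (u - np.roll(u, 1))
    # Substep 2: diffusion (explicit central difference)
    u = u + nu * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
# The pulse has moved downstream and spread; total mass is unchanged.
```

In the paper's setting the same split lets each substep use the discretization best suited to its direction: finite differences horizontally, finite elements vertically.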
FIRST STEPS TOWARDS AN INTEGRATED CITYGML-BASED 3D MODEL OF VIENNA
Directory of Open Access Journals (Sweden)
G. Agugiaro
2016-06-01
This paper reports on the experience gained so far: it describes the test area and the available data sources, and it exemplifies the data-integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.
Halpern, Laurence; Japhet, Caroline
2010-01-01
We design and analyze a Schwarz waveform relaxation algorithm for domain decomposition of advection-diffusion-reaction problems with strong heterogeneities. The interfaces are curved, and we use optimized Robin or Ventcell transmission conditions. We analyze the semi-discretization in time with Discontinuous Galerkin as well. We also show two-dimensional numerical results using generalized mortar finite elements in space.
Bochev, Mikhail A.; Oseledets, I.V.; Tyrtyshnikov, E.E.
2013-01-01
The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous-time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift-and-invert technique.
Dynamic models in space and time
Elhorst, J.P.
2001-01-01
This paper presents a first-order autoregressive distributed lag model in both space and time. It is shown that this model encompasses a wide series of simpler models frequently used in the analysis of space-time data as well as models that better fit the data and have never been used before. A fram
A stepped leader model for lightning including charge distribution in branched channels
Energy Technology Data Exchange (ETDEWEB)
Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)
2014-09-14
The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.
Directory of Open Access Journals (Sweden)
Joana G. Aguiar
2017-09-01
Full Text Available Training users in the concept mapping technique is critical for ensuring a high-quality concept map in terms of graphical structure and content accuracy. However, assessing excellence in concept mapping through structural and content features is a complex task. This paper proposes a two-step sequential training in concept mapping. The first step requires the fulfilment of low-order cognitive objectives (remember, understand and apply) to facilitate novices’ development into good Cmappers by honing their knowledge representation skills. The second step requires the fulfilment of high-order cognitive objectives (analyse, evaluate and create) to grow good Cmappers into excellent ones through the development of knowledge modelling skills. Based on Bloom’s revised taxonomy and cognitive load theory, this paper presents theoretical accounts to (1) identify the criteria distinguishing good and excellent concept maps, (2) inform instructional tasks for concept map elaboration and (3) propose a prototype for training users on concept mapping combining online and face-to-face activities. The proposed training application and the institutional certification are the next steps for the mature use of concept maps for educational as well as business purposes.
Time lags in biological models
MacDonald, Norman
1978-01-01
In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
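The "special method" using an extended set of ordinary differential equations is commonly known as the linear chain trick: a gamma-distributed lag is exactly equivalent to a cascade of first-order stages. A minimal sketch for a logistic equation with a distributed lag (the example and all parameter values are illustrative, not taken from the lecture notes):

```python
import numpy as np

def linear_chain_rhs(t, y, r=1.0, K=1.0, a=2.0, p=3):
    """Logistic growth limited by a gamma-distributed lagged density,
    rewritten as ODEs via the linear chain trick.

    y[0]  : population N
    y[1:] : p auxiliary stages; the last stage approximates the lagged N.
    """
    N, z = y[0], y[1:]
    dN = r * N * (1.0 - z[-1] / K)   # growth responds to the *lagged* density
    dz = np.empty(p)
    dz[0] = a * (N - z[0])           # each stage relaxes toward the previous one
    for j in range(1, p):
        dz[j] = a * (z[j - 1] - z[j])
    return np.concatenate(([dN], dz))

def euler(rhs, y0, t_end, dt):
    """Fixed-step forward Euler; adequate for a demonstration run."""
    y, t = np.array(y0, dtype=float), 0.0
    while t < t_end:
        y = y + dt * rhs(t, y)
        t += dt
    return y

# Start below carrying capacity, all stages initialized consistently.
y_final = euler(linear_chain_rhs, [0.1, 0.1, 0.1, 0.1], t_end=50.0, dt=0.01)
```

With these values (mean lag p/a = 1.5) the equilibrium is still stable and the solution settles at the carrying capacity; increasing r pushes the system toward the sustained oscillations whose existence the lecture notes analyse.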
Energy Technology Data Exchange (ETDEWEB)
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-03
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
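The pass/fail logic described here can be sketched in a few lines: a test run fails when its RMS difference from the reference exceeds the spread produced by the reference model's own time step sensitivity. The array sizes and perturbation scales below are illustrative stand-ins for real model states:

```python
import numpy as np

def rms_diff(a, b):
    """Root-mean-square difference between two model state vectors."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def solution_change_test(test_state, ref_state, ref_ensemble):
    """Fail when the test-vs-reference difference exceeds the largest
    difference found in the reference time-step-sensitivity ensemble."""
    threshold = max(rms_diff(member, ref_state) for member in ref_ensemble)
    return "PASS" if rms_diff(test_state, ref_state) <= threshold else "FAIL"

rng = np.random.default_rng(0)
ref = rng.normal(size=100)                                # reference model state
ensemble = [ref + rng.normal(scale=1e-3, size=100) for _ in range(5)]
ok_run  = ref + rng.normal(scale=1e-4, size=100)          # rounding-level change
bad_run = ref + rng.normal(scale=1e-1, size=100)          # optimization-sized change
```

The key design choice is that the threshold is derived from the reference model itself, so no externally chosen tolerance is needed.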
Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system
Directory of Open Access Journals (Sweden)
Yoon Lee
2012-08-01
Full Text Available Objectives To investigate the effect of dentin moisture degree and air-drying time on the dentin bond strength of two different one-step self-etching adhesive systems. Materials and Methods Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. Then the tooth was sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA, and regression analysis was done (p = 0.05). Results All three factors (material, dentin wetness and air-drying time) showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and a 10 sec air-drying time showed higher bond strength. Conclusions Within the limitations of this experiment, the air-drying time after application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and the dentin moisture before applying the adhesive agent.
Energy Technology Data Exchange (ETDEWEB)
Finn, John M., E-mail: finn@lanl.gov [T-5, Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
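For dx/dt = B(x), the implicit midpoint rule solves x_{n+1} = x_n + h B((x_n + x_{n+1})/2) at every step; a fixed-point iteration suffices for small h. The rotational test field below is an illustrative stand-in, not one of the paper's plasma fields:

```python
import numpy as np

def implicit_midpoint_step(B, x, h, iters=50):
    """One implicit-midpoint step for dx/dt = B(x), solved by fixed-point iteration."""
    x_new = x + h * B(x)                       # explicit Euler predictor
    for _ in range(iters):
        x_new = x + h * B(0.5 * (x + x_new))   # midpoint correction
    return x_new

def B(x):
    """Divergence-free rotational test field about the z axis."""
    return np.array([-x[1], x[0], 0.0])

x = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint_step(B, x, h=0.01)
radius = float(np.hypot(x[0], x[1]))   # the circle radius should be preserved
```

For this linear field the implicit midpoint update is a Cayley transform, so the invariant circles of the flow are preserved essentially to machine precision, which is the "preserving KAM tori" behavior the abstract refers to in its simplest possible setting.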
Bouzat, Sebastián
2016-01-01
One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of the thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run time match previously specified functions of the external load, which are set on the basis of experimental results. We show that, due to the influence of the thermal fluctuations, this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics which considers memory of the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force of the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of the viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to the multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
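A bare-bones version of such a coupled model: an overdamped Langevin cargo tethered elastically to a motor whose stepping rate decreases linearly with load. Every constant below is an illustrative placeholder in arbitrary units, not a calibrated value from the paper:

```python
import numpy as np

def simulate(T=1.0, dt=1e-4, k=0.1, gamma=1e-3, kBT=4.1e-3,
             step=8e-3, base_rate=100.0, F_stall=6.0, seed=1):
    """Cargo position x follows an overdamped Langevin equation; the motor
    at m steps forward at a load-dependent rate (linear force-velocity)."""
    rng = np.random.default_rng(seed)
    x, m = 0.0, 0.0
    for _ in range(int(T / dt)):
        F = k * (m - x)                                # elastic motor-cargo link
        rate = base_rate * max(0.0, 1.0 - F / F_stall) # stepping slows near stall
        if rng.random() < rate * dt:
            m += step                                  # motor takes one step
        # overdamped Langevin update with thermal noise
        x += (F / gamma) * dt + np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal()
    return x, m

x_final, m_final = simulate()
```

Even this toy version shows the issue the paper raises: the instantaneous force F felt by the motor fluctuates thermally around its mean, so rates defined naively on the local force do not reproduce the intended force-velocity relation without some form of memory.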
Hepatoprotective and anti-fibrotic agents: It’s time to take the next step
Directory of Open Access Journals (Sweden)
Ralf Weiskirchen
2016-01-01
Full Text Available Hepatic fibrosis and cirrhosis cause great human suffering and impose a heavy monetary burden worldwide. Therefore, there is an urgent need for the development of therapies. Pre-clinical animal models are indispensable in the drug discovery and development of new anti-fibrotic compounds and are immensely valuable for understanding and proving their proposed modes of action. In fibrosis research, inbred mice and rats are by far the most used species for testing drug efficacy. During the last decades, several hundred or even a thousand different drugs that reproducibly evoke beneficial effects on liver health in the respective disease models were identified. However, there are only a few compounds (e.g. GR-MD-02, GM-CT-01) that were translated from bench to bedside. In contrast, the large number of drugs successfully tested in animal studies is tested over and over again, engendering findings with similar or identical outcomes. This circumstance undermines the 3R (Replacement, Refinement, Reduction) principle of Russell and Burch that was introduced to minimize the suffering of laboratory animals. This ethical framework, however, represents the basis of the new animal welfare regulations in the member states of the European Union. Consequently, the legal authorities in the different countries are urged to foreclose testing of drugs in animals that were successfully tested before. This review provides a synopsis of anti-fibrotic compounds that were tested in classical rodent models. Their modes of action, potential sources and the observed beneficial effects on liver health are discussed. This review attempts to provide a reference compilation for all those involved in the testing of drugs or in the design of new clinical trials targeting hepatic fibrosis.
Institute of Scientific and Technical Information of China (English)
钟伟民; 何国龙; 皮道映; 孙优贤
2005-01-01
A support vector machine (SVM) with a quadratic polynomial kernel function based nonlinear one-step-ahead predictive controller is presented. The SVM-based predictive model is established with a black-box identification method. By solving a cubic equation in the feature space, an explicit predictive control law is obtained through the predictive control mechanism. The effectiveness of the controller is demonstrated on a recognized benchmark problem and on the control of a continuous stirred-tank reactor (CSTR). Simulation results show that the SVM with quadratic polynomial kernel function based predictive controller can be well applied to nonlinear systems, with good performance in following reference trajectories as well as in disturbance rejection.
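A minimal numerical illustration of a kernel-based one-step-ahead predictive model: here kernel ridge regression with the same quadratic polynomial kernel stands in for the paper's SVM (an assumption made for brevity), and the plant is an invented toy system:

```python
import numpy as np

def quad_kernel(A, B):
    """Quadratic polynomial kernel k(a, b) = (a . b + 1)^2."""
    return (A @ B.T + 1.0) ** 2

def fit_one_step_model(Y, U, lam=1e-6):
    """Fit a black-box one-step-ahead model y[k+1] = f(y[k], u[k]) by
    kernel ridge regression (standing in for the SVM of the paper)."""
    X = np.column_stack([Y[:-1], U[:-1]])      # regressors (y[k], u[k])
    t = Y[1:]                                  # targets y[k+1]
    K = quad_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), t)
    return lambda y, u: (quad_kernel(np.array([[y, u]]), X) @ alpha)[0]

# Identify an invented toy plant y[k+1] = 0.8*y[k] + u[k]^2 from simulated data.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.8 * y[k] + u[k] ** 2
predict = fit_one_step_model(y, np.append(u, 0.0))
```

Because the toy plant is itself quadratic in (y, u), the quadratic kernel can represent it exactly, which is the same property that lets the paper's control law reduce to solving a cubic equation in the input.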
Sensitivity of ensemble Lagrangian reconstructions to assimilated wind time step resolution
Directory of Open Access Journals (Sweden)
I. Pisso
2009-04-01
Full Text Available The aim of this study is to define the optimal temporal and spatial resolution required for accurate offline diffusive Lagrangian reconstructions of high-resolution in-situ tracer measurements, based on meteorological wind fields and on coarse-resolution 3-D tracer distributions. Increasing the time resolution of the advecting winds from three- to one-hour intervals has a modest impact on diffusive reconstructions in the case studied. This result is discussed in terms of the effect on the geometry of transported clouds of points, in order to set out a method to assess the effect of the meteorological flow on the transport of atmospheric tracers.
One-step Real-time Food Quality Analysis by Simultaneous DSC-FTIR Microspectroscopy.
Lin, Shan-Yang; Lin, Chih-Cheng
2016-01-01
This review discusses an analytical technique that combines differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy, which simulates the accelerated stability test and detects decomposition products simultaneously in real time. We show that the DSC-FTIR technique is a fast, simple and powerful analytical tool with applications in food sciences. This technique has been applied successfully to the simultaneous investigation of: encapsulated squid oil stability; the dehydration and intramolecular condensation of sweetener (aspartame); the dehydration, rehydration and solidification of trehalose; and online monitoring of the Maillard reaction for glucose (Glc)/asparagine (Asn) in the solid state. This technique delivers rapid and appropriate interpretations with food science applications.
First steps towards real-time radiography at the NECTAR facility
Energy Technology Data Exchange (ETDEWEB)
Buecherl, T. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)], E-mail: thomas.buecherl@radiochemie.de; Wagner, F.M. [Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II), Technische Universitaet Muenchen (Germany); Lierse von Gostomski, Ch. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)
2009-06-21
The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8×10^7 cm^-2 s^-1 (depending on filters and collimation) with a mean energy of about 1.9 MeV at the sample position at the NECTAR facility prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.
Kalman Filtered Daily GRACE Gravity Field Solutions in Near Real-Time- First Steps
Kvas, Andreas; Mayer-Gurr, Torsten
2016-08-01
As part of the EGSIEM (European Gravity Service for Improved Emergency Management) project, a technology demonstrator for a near real-time (NRT) gravity field service will be established. In preparation of the operational phase, several aspects of the daily gravity field processing chain at Graz University of Technology have been inspected in order to improve the gravity field solutions and move towards NRT. The effect of these adaptions is investigated by comparison with post-processing and forward-only filtered solutions and evaluated using in-situ data.
Salles, Loic; Gouskov, Alexandre; Jean, Pierrick; Thouverez, Fabrice
2014-01-01
Contact interfaces with dry friction are frequently used in turbomachinery. Dry friction damping produced by the sliding surfaces of these interfaces reduces the amplitude of bladed-disk vibration. The relative displacements at these interfaces lead to fretting-wear which reduces the average life expectancy of the structure. Frequency response functions are calculated numerically by using the multi-harmonic balance method (mHBM). The dynamic Lagrangian frequency-time method is used to calculate contact forces in the frequency domain. A new strategy for solving nonlinear systems based on dual time stepping is applied. This method is faster than using Newton solvers. It was used successfully for solving nonlinear CFD equations in the frequency domain. This new approach allows identifying the steady state of worn systems by integrating wear rate equations on a dual time scale. The dual time equations are integrated by an implicit scheme. Of the different orders tested, the first order scheme provided the best re...
First Steps Towards an Integrated CityGML-Based 3D Model of Vienna
Agugiaro, G.
2016-06-01
This paper presents and discusses the results regarding the initial steps (selection, analysis, preparation and eventual integration of a number of datasets) for the creation of an integrated, semantic, three-dimensional, CityGML-based virtual model of the city of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. It is being adopted by more and more cities all over the world. The work described in this paper is embedded within the European Marie-Curie ITN project "Ci-nergy, Smart cities with sustainable energy systems", which aims, among other things, at developing urban decision-making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is therefore vital to set up a common, unique and spatio-semantically coherent urban model to be used as the information hub for all applications being developed. This paper reports on the experiences gained so far: it describes the test area and the available data sources, and it shows and exemplifies the data integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.
Experimental investigation of initial steps of helix propagation in model peptides.
Goch, Grazyna; Maciejczyk, Maciej; Oleszczuk, Marta; Stachowiak, Damian; Malicka, Joanna; Bierzyński, Andrzej
2003-06-10
It is not certain whether the helix propagation parameters s_n (i.e., the equilibrium constants between (n - 1)- and n-residue long α-helices) determined from numerous studies of rather long model peptides are applicable for description of the initial steps of the helix formation during the protein folding process. From fluorescence, NMR, and calorimetric studies of a series of model peptides containing the La³⁺-binding sequence nucleating the helix (Siedlecka, M., Goch, G., Ejchart, A., Sticht, H., and Bierzynski, A. (1999) Proc. Natl. Acad. Sci. U.S.A. 96, 903-908), we have determined, at 25 °C, the average values of the enthalpy ΔH_n and of the helix growth parameters s_n describing the first four steps of helix propagation in polyalanine. The absolute values of the C-cap parameters, describing the contribution of the C-terminal residues to the helix free energy, have also been estimated for alanine (1.2 ± 0.5) and the NH₂ group (1.6 ± 0.7). The initial four steps of the helix growth in polyalanine can be described by a common propagation parameter s = 1.54 ± 0.04. The enthalpy ΔH_n is also constant and equals -980 ± 100 cal mol⁻¹.
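For reference, the quoted propagation parameter converts to a per-residue free energy through ΔG = -RT ln s; a quick arithmetic check at 25 °C using the values in the abstract:

```python
import math

R = 1.987e-3    # gas constant in kcal mol^-1 K^-1
T = 298.15      # 25 degrees C in kelvin
s = 1.54        # helix propagation parameter quoted in the abstract

dG = -R * T * math.log(s)   # free energy per added helical residue, kcal/mol
dH = -0.980                 # enthalpy quoted in the abstract, kcal/mol
TdS = dH - dG               # implied entropic contribution (T*dS), kcal/mol
```

Each added residue is thus favorable by only about a quarter of a kcal/mol: the substantial favorable enthalpy is largely cancelled by the entropic cost, consistent with the need for a nucleating sequence in these experiments.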
Nam, Kwangho
2014-10-14
The development of a multiscale ab initio quantum mechanical and molecular mechanical (AI-QM/MM) method for periodic boundary molecular dynamics (MD) simulations, and its acceleration by a multiple time step approach, are described. The developed method achieves accuracy and efficiency by integrating the AI-QM/MM level of theory with the previously developed semiempirical (SE) QM/MM-Ewald sum method [J. Chem. Theory Comput. 2005, 1, 2] extended to the smooth particle-mesh Ewald (PME) summation method. In the developed method, the total energy of the simulated system is evaluated at the SE-QM/MM-PME level of theory to include long-range QM/MM electrostatic interactions, which is then corrected on the fly using the AI-QM/MM level of theory within the real-space cutoff. The resulting energy expression enables decomposition of the total forces applied to each atom into forces determined at the low-level SE-QM/MM method and correction forces at the AI-QM/MM level, so that the system can be integrated using the reversible reference system propagator algorithm. The resulting method achieves a substantial speed-up of the entire calculation by minimizing the number of time-consuming energy and gradient evaluations at the AI-QM/MM level. Test calculations show that the developed multiple time step AI-QM/MM method yields MD trajectories and potential of mean force profiles comparable to single time step QM/MM results. The developed method, together with message passing interface (MPI) parallelization, accelerates the present AI-QM/MM MD simulations about 30-fold relative to the speed of single-core AI-QM/MM simulations for the molecular systems tested in the present work, making the method less than one order of magnitude slower than the SE-QM/MM methods under periodic boundary conditions.
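The reversible reference system propagator (r-RESPA) scheme mentioned above wraps many cheap inner steps between two half "kicks" of the expensive correction force. A generic sketch on a toy force split (the potentials and all constants are illustrative, not a QM/MM setup):

```python
def respa_step(x, v, f_cheap, f_corr, dt, n_inner, m=1.0):
    """One r-RESPA step: half kick with the expensive correction force,
    n_inner velocity-Verlet steps under the cheap force, then another half kick."""
    v += 0.5 * dt * f_corr(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_cheap(x) / m
        x += h * v
        v += 0.5 * h * f_cheap(x) / m
    v += 0.5 * dt * f_corr(x) / m
    return x, v

# Toy split: a stiff harmonic force evaluated every inner step and a weak
# anharmonic "correction" evaluated only once per outer step.
f_cheap = lambda x: -4.0 * x
f_corr  = lambda x: -0.1 * x ** 3

def energy(x, v):
    return 0.5 * v ** 2 + 2.0 * x ** 2 + 0.025 * x ** 4

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, f_cheap, f_corr, dt=0.05, n_inner=5)
e1 = energy(x, v)
```

The expensive force is evaluated once per outer step instead of once per inner step, which is exactly where the reported speed-up of the AI-QM/MM correction comes from; because each sub-propagator is symplectic, the total energy stays bounded over long runs.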
In the time of significant generational diversity - surgical leadership must step up!
Money, Samuel R; O'Donnell, Mark E; Gray, Richard J
2014-02-01
The diverse attitudes and motivations of surgeons and surgical trainees within different age groups present an important challenge for surgical leaders and educators. These challenges to surgical leadership are not unique, and other industries have likewise needed to grapple with how best to manage these various age groups. The authors will herein explore management and leadership for surgeons in a time of age diversity, define generational variations within "Baby-Boomer", "Generation X" and "Generation Y" populations, and identify work ethos concepts amongst these three groups. The surgical community must understand and embrace these concepts in order to continue to attract a stellar pool of applicants from medical school. By not accepting the changing attitudes and motivations of young trainees and medical students, we may disenfranchise a high percentage of potential future surgeons. Surgical training programs will fill, but will they contain the highest quality trainees?
Connectionist and diffusion models of reaction time.
Ratcliff, R; Van Zandt, T; McKoon, G
1999-04-01
Two connectionist frameworks, GRAIN (J. L. McClelland, 1993) and brain-state-in-a-box (J. A. Anderson, 1991), and R. Ratcliff's (1978) diffusion model were evaluated using data from a signal detection task. Dependent variables included response probabilities, reaction times for correct and error responses, and shapes of reaction-time distributions. The diffusion model accounted for all aspects of the data, including error reaction times that had previously been a problem for all response-time models. The connectionist models accounted for many aspects of the data adequately, but each failed to a greater or lesser degree in important ways except for one model that was similar to the diffusion model. The findings advance the development of the diffusion model and show that the long tradition of reaction-time research and theory is a fertile domain for development and testing of connectionist assumptions about how decisions are generated over time.
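Ratcliff's diffusion model treats a decision as noisy evidence accumulation to one of two bounds; the first-passage times generate the predicted reaction-time distributions. A minimal simulation (all parameter values are illustrative, not fitted to the signal detection data):

```python
import numpy as np

def diffusion_trial(drift=0.2, bound=1.0, sigma=1.0, dt=1e-3, t_nd=0.3, rng=None):
    """One trial of the diffusion model: accumulate evidence from 0 until
    +bound (correct) or -bound (error); t_nd is the non-decision time."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + t_nd, x > 0   # (reaction time, correct?)

rng = np.random.default_rng(42)
trials = [diffusion_trial(rng=rng) for _ in range(300)]
accuracy = float(np.mean([correct for _, correct in trials]))
mean_rt = float(np.mean([rt for rt, _ in trials]))
```

A single drift and bound jointly determine accuracy, correct and error reaction times, and the right-skewed shape of the RT distribution, which is why the model can be tested against all of these dependent variables at once.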
Thermal modeling of step-out targets at the Soda Lake geothermal field, Churchill County, Nevada
Dingwall, Ryan Kenneth
Temperature data at the Soda Lake geothermal field in the southeastern Carson Sink, Nevada, highlight an intense thermal anomaly. The geothermal field produces roughly 11 MWe from two power-producing facilities which are rated to 23 MWe. The low output is attributed to the inability to locate and produce sufficient volumes of fluid at adequate temperature. Additionally, the current producing area has experienced declining production temperatures over its 40-year history. Two step-out targets adjacent to the main field have been identified that have the potential to increase production and extend the life of the field. Though shallow temperatures in the two subsidiary areas are significantly lower than those found within the main anomaly, measurements in deeper wells (>1,000 m) show that temperatures viable for utilization are present. High-pass filtering of the available complete Bouguer gravity data indicates that geothermal flow is present within the shallow sediments of the two subsidiary areas. Significant faulting is observed in the seismic data in both of the subsidiary areas. These structures are highlighted in the seismic similarity attribute calculated as part of this study. One possible conceptual model for the geothermal system(s) at the step-out targets indicates upflow along these faults from depth. To test this hypothesis, three-dimensional computer models were constructed to observe the temperatures that would result from geothermal flow along the observed fault planes. Results indicate that the observed faults are viable hosts for the geothermal system(s) in the step-out areas. Subsequently, these faults are proposed as targets for future exploration focus and step-out drilling.
The next step in real time data processing for large scale physics experiments
Paramesvaran, Sudarshan
2016-01-01
Run 2 of the LHC represents one of the most challenging scientific environments for real-time data analysis and processing. The steady increase in instantaneous luminosity will result in the CMS detector producing around 150 TB/s of data, only a small fraction of which is useful for interesting physics studies. During 2015 the CMS collaboration will be completing a total upgrade of its Level 1 Trigger to deal with these conditions. In this talk the major components of this complex system will be described, including custom-designed electronic processing boards built to the uTCA specification, with AMC cards based on Xilinx 7 FPGAs and a network of high-speed optical links. In addition, novel algorithms will be described which deliver excellent performance in FPGAs and are combined with highly stable software frameworks to ensure a minimal risk of downtime. This upgrade is planned to take data from 2016. However a system of parallel running has been developed that will ...
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recent...
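For context, plain (unweighted) balanced truncation, the starting point for time-weighted variants, can be sketched in NumPy with the textbook square-root algorithm; the 3-state system below is illustrative, not from the paper:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by vectorization (fine for tiny systems)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C) to order r."""
    P = lyap(A, B @ B.T)                        # controllability Gramian
    Q = lyap(A.T, C.T @ C)                      # observability Gramian
    Lc = np.linalg.cholesky(0.5 * (P + P.T))    # symmetrize before factoring
    Lo = np.linalg.cholesky(0.5 * (Q + Q.T))
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)         # s holds the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                       # right projection
    W = Lo @ U[:, :r] @ S                       # left projection, W.T @ T = I
    return W.T @ A @ T, W.T @ B, C @ T, s

# Illustrative stable 3-state system reduced to order 2.
A = np.array([[-1.0, 0.1, 0.0], [0.0, -2.0, 0.2], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.5], [0.1]])
C = np.array([[1.0, 0.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The appeal of this family of methods is the a priori error bound: the H-infinity error of the reduced model is at most twice the sum of the discarded Hankel singular values, so the truncation order can be chosen before any simulation.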
Model checking timed automata : techniques and applications
Hendriks, Martijn.
2006-01-01
Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Analytical model of LDMOS with a single step buried oxide layer
Yuan, Song; Duan, Baoxing; Cao, Zhen; Guo, Haijun; Yang, Yintang
2016-09-01
In this paper, a two-dimensional analytical model is established for the Single-Step Buried Oxide SOI structure proposed by the authors. Based on the two-dimensional Poisson equation, analytic expressions for the surface electric field and potential distributions of the device are derived. In the SBOSOI (Single-Step Buried Oxide Silicon On Insulator) structure, the buried oxide layer thickness changes stepwise along the drift region, and the electric field in the oxide layer varies with the buried oxide layer thickness. These variations modulate the surface electric field distribution through electric field modulation effects, which makes the surface electric field distribution more uniform. As a result, the breakdown voltage of the device is improved by 60% compared with the conventional SOI structure. To verify the accuracy of the analytical model, the device simulation software ISE TCAD is utilized; the analytical values are in good agreement with the simulation results. The results verify that the established two-dimensional analytical model for the SBOSOI structure is valid, and they also clearly illustrate the breakdown voltage enhancement due to the electric field modulation effect. The established analytical model will provide a physical and mathematical basis for further analysis of new power devices with patterned buried oxide layers.
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has become finer over the past 17 years: from 0.5-2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1
Directory of Open Access Journals (Sweden)
Erion Periku
2015-01-01
Full Text Available One of the most important factors influencing the decision whether and how a tunnel is to be built is the estimated time and cost of construction. This study is based on a construction time analysis of the different steps of the drill-and-blast method for hydropower tunnel excavation, covering a working phase of 6,256.00 m of tunnels with diameters varying from 4.20 to 7.60 m. A total of 737 field measurements were made, showing that many of the machinery and workmanship production rates per unit time are significantly lower, by 35% to 50%, than those defined in the technical specifications; the measurements also indicate that the highest performance is reached in the 7.60 m diameter tunnel excavation. It is believed that these data will be helpful for the planning and management of tunnel construction projects, especially those planned to be built in Albania, where the labor market has similar features.
Park, Seol; Chung, Jun-Sub; Kong, Yong-Soo; Ko, Yu-Min; Park, Ji-Won
2015-09-01
[Purpose] We investigated the difference in onset time between the vastus medialis and lateralis according to knee alignment during stair ascent and descent to examine the effects of knee alignment on the quadriceps during stair stepping. [Subjects] Fifty-two adults (20 with genu varum, 12 with genu valgum, and 20 controls) were enrolled. Subjects with > 4 cm between the medial epicondyles of the knees were placed in the genu varum group, whereas subjects with > 4 cm between the medial malleoli of the ankles were placed in the genu valgum group. [Methods] Surface electromyography was used to measure the onset times of the vastus medialis and vastus lateralis during stair ascent and descent. [Results] The vastus lateralis showed more delayed firing than the vastus medialis in the genu varum group, whereas vastus medialis firing was more delayed than vastus lateralis firing in the genu valgum group. Significant differences in onset time were detected between stair ascent and descent in the genu varum and valgum groups. [Conclusion] Genu varum and valgum affect quadriceps firing during stair stepping. Therefore, selective rehabilitation training of the quadriceps femoris should be considered to prevent pain or knee malalignment deformities.
Brucker, G. A.; Ionov, S. I.; Chen, Y.; Wittig, C.
1992-07-01
Time-resolved, subpicosecond-resolution measurements of NO2 photoinitiated unimolecular decay rates are reported for jet-cooled samples in the vicinity of the dissociation threshold. The molecules are excited by 385-400 nm tunable subpicosecond pulses to the 2B2 electronic state, which is very strongly mixed with the 2A1 ground electronic state. Subsequent decomposition is probed by a 226 nm subpicosecond pulse which excites LIF in the NO product. When the amount of energy in excess of the dissociation threshold is changed, a step-like increase of the reaction rate versus energy is observed.
Changing Safety Culture, One Step at a Time: The Value of the DOE-VPP Program at PNNL
Energy Technology Data Exchange (ETDEWEB)
Wright, Patrick A.; Isern, Nancy G.
2005-02-01
The primary value of the Pacific Northwest National Laboratory (PNNL) Voluntary Protection Program (VPP) is the ongoing partnership between management and staff committed to change Laboratory safety culture one step at a time. VPP enables PNNL's safety and health program to transcend a top-down, by-the-book approach to safety, and it also raises grassroots safety consciousness by promoting a commitment to safety and health 24 hours a day, 7 days a week. PNNL VPP is a dynamic, evolving program that fosters innovative approaches to continuous improvement in safety and health performance at the Laboratory.
A stepped-care model of post-disaster child and adolescent mental health service provision
Directory of Open Access Journals (Sweden)
Brett M. McDermott
2014-07-01
Full Text Available Background: From a global perspective, natural disasters are common events. Published research highlights that a significant minority of exposed children and adolescents develop disaster-related mental health syndromes and associated functional impairment. Consistent with the considerable unmet need of children and adolescents with regard to psychopathology, there is strong evidence that many children and adolescents with post-disaster mental health presentations are not receiving adequate interventions. Objective: To critique existing child and adolescent mental health services (CAMHS) models of care and the capacity of such models to deal with any post-disaster surge in clinical demand, and to detail an innovative service response: a child and adolescent stepped-care service provision model. Method: A narrative review of traditional CAMHS is presented. Important elements of a disaster response (individual versus community recovery, public health approaches, capacity for promotion and prevention, and service reach) are discussed and compared with the CAMHS approach. Results: Difficulties with traditional models of care are highlighted across all levels of intervention, from the ability to provide preventative initiatives to the capacity to provide intensive specialised posttraumatic stress disorder interventions. In response, our over-arching stepped-care model is advocated. The general response is discussed and details of the three tiers of the model are provided: Tier 1, a communication strategy; Tier 2, parent effectiveness and teacher training; and Tier 3, screening linked to trauma-focused cognitive behavioural therapy. Conclusion: In this paper, we argue that traditional CAMHS are not an appropriate model of care to meet the clinical needs of this group in the post-disaster setting. We conclude with suggestions on how improved post-disaster child and adolescent mental health outcomes can be achieved by applying an innovative service approach.
A Two-Step Model for Gamma-Ray Bursts Associated with Supernovae
Cheng, K S
1999-01-01
We here propose a two-step model for gamma-ray bursts (GRBs) associated with supernovae. In the first step, the core collapse of a star with mass $\\ge 19M_\\odot$ leads to a massive neutron star and a normal supernova, and subsequently hypercritical accretion of the neutron star from the supernova ejecta may give rise to a jet through neutrino annihilation along the stellar rotation axis. However, because of too much surrounding matter, this jet rapidly enters a nonrelativistic phase and evolves to a large bubble. In the second step, the neutron star promptly implodes to a rapidly rotating black hole surrounded by a torus when the mass of the star increases to the maximum mass and meanwhile its rotation frequency increases to the upper limit due to the accreted angular momentum. The gravitational binding energy of the torus may be dissipated by a magnetized relativistic wind, which may then be absorbed by the supernova ejecta, thus producing an energetic hypernova. The rotational energy of the black hole may b...
Energy Technology Data Exchange (ETDEWEB)
Mather, Barry
2017-08-24
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support it. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
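The core idea of a variable-time-step QSTS solver can be sketched as follows: march through the time series, growing the step while the inputs are quiet and collapsing back to the finest resolution when a change beyond tolerance is detected. This is an illustrative reconstruction, not the paper's solver; `solve` stands for any (expensive) power-flow solution, and the tolerance and maximum step are placeholder values.

```python
def qsts_variable_step(profile, solve, tol=0.005, max_step=60):
    """Variable-time-step sweep over a 1-s input profile: re-solve the power
    flow only when the input has moved by more than `tol` since the last
    solve; otherwise double the step up to `max_step` seconds."""
    results = {}                    # time index -> power-flow solution
    t, step, last = 0, 1, None
    while t < len(profile):
        if last is None or abs(profile[t] - last) > tol:
            results[t] = solve(profile[t])   # expensive solve
            last = profile[t]
            step = 1                         # change detected: refine again
        else:
            step = min(step * 2, max_step)   # quiet: stride faster
        t += step
    return results
```

One known tradeoff of this scheme: a change occurring mid-stride is only detected at the next sampled instant, so the maximum step bounds the detection latency.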
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
Lamb wave techniques have been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to their multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy respectively, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply the proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size, considering different element types, and time step, considering different time integration schemes, are then selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
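A minimal sketch of the two indices, assuming MACCC is the peak of the normalized cross-correlation between simulated and reference signals and the position error is read off the lag of that peak (the paper's exact definitions may differ):

```python
import numpy as np

def shape_and_position_indices(sim, ref, dt):
    """Compare a simulated signal against a reference waveform via normalized
    cross-correlation.  Returns (maccc, lag): MACCC is the peak |coefficient|
    (1.0 = identical shape); the lag (in seconds) of that peak measures the
    position error, from which a group velocity error could be derived once
    the propagation distance is known."""
    sim = np.asarray(sim, float)
    ref = np.asarray(ref, float)
    s = (sim - sim.mean()) / (sim.std() * len(sim))
    r = (ref - ref.mean()) / ref.std()
    cc = np.correlate(s, r, mode="full")          # all relative shifts
    k = int(np.argmax(np.abs(cc)))
    maccc = float(abs(cc[k]))
    lag = (k - (len(ref) - 1)) * dt               # samples -> seconds
    return maccc, lag
```

For identical signals the indices are (1.0, 0.0); a pure time shift leaves MACCC near 1 while the lag exposes the position error.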
A double-step truncation procedure for large-scale shell-model calculations
Coraggio, L; Itaco, N
2016-01-01
We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, so as to locate the relevant degrees of freedom needed to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that effectively retains the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven and five proton and neutron single-particle orb...
Nonlinear time series modelling: an introduction
Simon M. Potter
1999-01-01
Recent developments in nonlinear time series modelling are reviewed. Three main types of nonlinear models are discussed: Markov Switching, Threshold Autoregression and Smooth Transition Autoregression. Classical and Bayesian estimation techniques are described for each model. Parametric tests for nonlinearity are reviewed with examples from the three types of models. Finally, forecasting and impulse response analysis is developed.
A mechanical model for the role of the neck linker during kinesin stepping and gating
Wang, HaiYan; He, ChenJuan
2011-12-01
In this paper, considering the different elastic properties of the attached head and the free head, we propose a physical model in which the free head undergoes a diffusive search in an entropic spring potential formed by undocking the neck linker, while asymmetric conformational changes in the attached head, formed by docking the neck linker, support the load force and bias the diffusive search in the forward direction. By performing a thermodynamic analysis, we obtain the free energy difference between forward and backward binding sites. Using the Fokker-Planck equation with two absorbing boundaries, we obtain the dependence of the ratio of forward to backward steps on the backward force. Within the Michaelis-Menten model, we also investigate the dependence of the velocity-load relationship on the effective length of the junction between the two heads. The results show that our model can provide a physical understanding of the processive movement of kinesin.
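The dependence of the forward-to-backward step ratio on a backward load is often written in a Boltzmann form set by the free-energy difference between binding sites plus the work done against the load. A sketch under that assumption — the functional form and every parameter value below are illustrative, not taken from the paper:

```python
import math

def forward_backward_ratio(F, dG_kT=-12.0, d=8.2e-9, kT=4.1e-21):
    """Ratio of forward to backward steps under a backward load F (newtons),
    assuming the Boltzmann form  ratio = exp(-(dG_kT + F*d/kT)).
    Placeholder parameters: dG_kT is the binding-site free-energy difference
    in units of kT (negative = forward favored), d the ~8.2 nm step size,
    kT the thermal energy at room temperature (J)."""
    return math.exp(-(dG_kT + F * d / kT))
```

With these placeholder numbers the ratio crosses 1 at a stall load of |dG_kT|*kT/d, about 6 pN, which is in the range commonly reported for kinesin.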
A simple one-step chemistry model for partially premixed hydrocarbon combustion
Energy Technology Data Exchange (ETDEWEB)
Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)
2006-10-15
This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature T_a, and the preexponential factor B. It is seen that changes in q with equivalence ratio f need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(f) for f > 1 for all hydrocarbons. The model also employs a variable activation temperature T_a(f) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately across the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
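The model's rate law is one-step irreversible Arrhenius kinetics with unity reaction order; a sketch with placeholder parameter values (the paper fits q, T_a and B to premixed-flame computations, and the slope of the rich-side q(f) line is likewise illustrative here):

```python
import math

def one_step_rate(c_fuel, T, B=6e8, T_a=15000.0):
    """One-step irreversible Arrhenius rate with unity reaction order:
    omega = B * c_fuel * exp(-T_a / T).  B and T_a are placeholder values,
    not the fitted ones from the paper."""
    return B * c_fuel * math.exp(-T_a / T)

def q_of_phi(phi, q_st=50e6):
    """Sketch of the heat-of-reaction variation: constant for lean mixtures,
    falling linearly for equivalence ratio phi > 1 to mimic partial fuel
    oxidation (illustrative slope only)."""
    return q_st if phi <= 1.0 else q_st * max(0.0, 1.0 - 0.2 * (phi - 1.0))
```

The unity reaction order makes the rate linear in the fuel concentration, which is what keeps the model's parameter fitting to just the three constants.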
Detecting Location Shifts during Model Selection by Step-Indicator Saturation
Directory of Open Access Journals (Sweden)
Jennifer L. Castle
2015-04-01
Full Text Available To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a ‘split-half’ analysis, the simplest specialization of a multiple-path block-search algorithm. Monte Carlo simulations, extended to sequential reduction, confirm the accuracy of nominal significance levels under the null and show retentions when location shifts occur, improving the non-null retention frequency compared to the corresponding impulse-indicator saturation (IIS-based method and the lasso.
Institute of Scientific and Technical Information of China (English)
TENG Hong-Hui; JIANG Zong-Lin
2011-01-01
One-dimensional detonation waves are simulated with the three-step chain-branching reaction model, and the instability criterion is studied. The ratio of the induction zone length to the reaction zone length may be used to determine the instability, with the detonation becoming unstable at high ratios. However, the ratio is not invariable for different heat release values. The critical ratio, corresponding to the transition from stable to unstable detonation, has a negative correlation with the heat release. An empirical relation between the Chapman-Jouguet Mach number and the length ratio is proposed as the instability criterion.
Ultrasonic inspection of rocket fuel model using laminated transducer and multi-channel step pulser
Mihara, T.; Hamajima, T.; Tashiro, H.; Sato, A.
2013-01-01
For the ultrasonic inspection of the packing of solid fuel in a rocket booster, industrial inspection is difficult because the signal-to-noise ratio is degraded by large attenuation, even when lower-frequency ultrasound is used. To address this problem, we applied two techniques in ultrasonic inspection: a step-function pulser system with super-wideband frequency properties, and a laminated-element transducer. By combining these two techniques, we developed a new ultrasonic measurement system and demonstrated its advantages in the ultrasonic inspection of a rocket fuel model specimen.
A Simple Fuzzy Time Series Forecasting Model
DEFF Research Database (Denmark)
Ortiz-Arroyo, Daniel
2016-01-01
In this paper we describe a new first-order fuzzy time series forecasting model. We show that our automatic fuzzy partitioning method provides an accurate approximation to the time series that, when combined with rule forecasting and an OWA operator, improves forecasting accuracy. Our model does not attempt to provide the best results in comparison with other forecasting methods, but to show how to improve first-order models using simple techniques. However, we show that our first-order model is still capable of outperforming some more complex higher-order fuzzy time series models.
Time series modeling, computation, and inference
Prado, Raquel
2010-01-01
The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against the auto-regressive moving average (ARMA) model using the predicted values and errors. The validity of the proposed hybrid method is confirmed in terms of error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
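The two sub-processes can be sketched as a simple least-squares autoregression on the power ratio; this is an illustrative reduction of the MLR&LS combination, with `rated` (the rated power used to form the ratio) and the lag order as assumed inputs:

```python
import numpy as np

def predict_wind_power(history, rated, lags=3):
    """Sketch of the two sub-processes: (1) convert the power series to a
    power ratio and fit a linear autoregression on lagged ratios by least
    squares; (2) map the predicted ratio back to wind power.
    Illustrative only -- the paper's MLR&LS combination is more elaborate."""
    r = np.asarray(history, dtype=float) / rated            # step 1: power ratio
    X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
    y = r[lags:]
    A = np.column_stack([np.ones(len(X)), X])               # intercept + lags
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)            # least squares fit
    x_new = np.concatenate([[1.0], r[-lags:]])
    return float(x_new @ coef) * rated                      # step 2: back to power
```

Working in the ratio domain normalizes the series to [0, 1], which is one plausible reason the paper predicts the power ratio rather than the raw power.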
Effect of different air-drying time on the microleakage of single-step self-etch adhesives
Directory of Open Access Journals (Sweden)
Horieh Moosavi
2013-05-01
Full Text Available Objectives This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups according to the air-drying time: no air stream, air-drying following the manufacturer's instructions, or air-drying for 10 sec longer than the manufacturer's instructions. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instructions (p < 0.001). The microleakage of BF reached its lowest values when the drying time was increased to 10 sec more than the manufacturer's instructions (p < 0.001). Microleakage of OBAO and CSB was significantly lower than that of BF at all three drying times (p < 0.001). Conclusions Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives reduced microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.
Clementi, Emanuela; Oddo, Paolo; Drudi, Massimiliano; Pinardi, Nadia; Korres, Gerasimos; Grandi, Alessandro
2017-07-01
This work describes the first step towards a fully coupled modelling system composed of an ocean circulation and a wind wave model. Sensitivity experiments are presented for the Mediterranean Sea where the hydrodynamic model NEMO is coupled with the third-generation wave model WaveWatchIII (WW3). Both models are implemented at 1/16° horizontal resolution and are forced by ECMWF 1/4° horizontal resolution atmospheric fields. The models are two-way coupled at hourly intervals exchanging the following fields: sea surface currents and temperature are transferred from NEMO to WW3 by modifying the mean momentum transfer of waves and the wind speed stability parameter, respectively. The neutral drag coefficient computed by WW3 is then passed to NEMO, which computes the surface stress. Five-year (2009-2013) numerical experiments were carried out in both uncoupled and coupled mode. In order to validate the modelling system, numerical results were compared with coastal and drifting buoys and remote sensing data. The results show that the coupling of currents with waves improves the representation of the wave spectrum. However, the wave-induced drag coefficient shows only minor improvements in NEMO circulation fields, such as temperature, salinity, and currents.
Delivery Time Reliability Model of Logistics Network
Liusan Wu; Qingmei Tan; Yuehui Zhang
2013-01-01
Natural disasters such as earthquakes and floods can destroy the existing traffic network, usually accompanied by delivery delays or even network collapse. A logistics-network-related delivery time reliability model defined by a shortest-time entropy is proposed as a means to estimate the actual delivery time reliability. The smaller the entropy, the stronger the delivery time reliability, and vice versa. The shortest delivery time is computed separately based on two different assum...
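As a hypothetical illustration of the shortest-time-entropy idea (the abstract does not give the exact definition, so this is an assumed Shannon-entropy formulation), the entropy of a distribution over candidate shortest delivery times could be computed as:

```python
import math

def shortest_time_entropy(probs):
    """Hypothetical shortest-time entropy: Shannon entropy of the probability
    distribution over candidate shortest delivery times on the damaged
    network.  A sharply peaked distribution (low entropy) means the delivery
    time is highly predictable, i.e. reliable; a flat one means it is not."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```

For example, a near-certain delivery time such as [0.97, 0.01, 0.01, 0.01] yields a much lower entropy than four equally likely times, matching the "less entropy, more reliability" reading of the abstract.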
Continuous-Time Modeling with Spatial Dependence
Oud, J.H.L.; Folmer, H.; Patuelli, R.; Nijkamp, P.
2012-01-01
(Spatial) panel data are routinely modeled in discrete time (DT). However, compelling arguments exist for continuous-time (CT) modeling of (spatial) panel data. Particularly, most social processes evolve in CT, so that statistical analysis in DT is an oversimplification, gives an incomplete
Modeling of Step-up Grid-Connected Photovoltaic Systems for Control Purposes
Directory of Open Access Journals (Sweden)
Daniel Gonzalez
2012-06-01
Full Text Available This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state-space representation to simplify their use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect in selecting the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the results are validated on an experimental test bench.
Pola, Giordano; Di Benedetto, Maria Domenica
2010-01-01
Time-delay systems are an important class of dynamical systems that provide a solid mathematical framework to deal with many application domains of interest. In this paper we focus on nonlinear control systems with unknown and time-varying delay signals, and we propose an approach to the control design of such systems which is based on the construction of symbolic models. Symbolic models are abstract descriptions of dynamical systems in which one symbolic state and one symbolic input correspond to an aggregate of states and an aggregate of inputs. We first introduce the notion of incremental input-delay-to-state stability and characterize it by means of Lyapunov-Krasovskii functionals. We then derive sufficient conditions for the existence of symbolic models that are shown to be alternating approximately bisimilar to the original system. Further results are also derived which prove the computability of the proposed symbolic models in a finite number of steps.
Bayesian inference for pulsar timing models
Vigeland, Sarah J
2013-01-01
The extremely regular, periodic radio emission from millisecond pulsars makes them useful tools for studying neutron star astrophysics, general relativity, and low-frequency gravitational waves. These studies require that the observed pulse times of arrival be fit to complicated timing models that describe numerous effects such as the astrometry of the source, the evolution of the pulsar's spin, the presence of a binary companion, and the propagation of the pulses through the interstellar medium. In this paper, we discuss the benefits of using Bayesian inference to obtain these timing solutions. These include the validation of linearized least-squares model fits when they are correct, and the proper characterization of parameter uncertainties when they are not; the incorporation of prior parameter information and of models of correlated noise; and the Bayesian comparison of alternative timing models. We describe our computational setup, which combines the timing models of tempo2 with the nested-sampling integ...
Mixed continuous/discrete time modelling with exact time adjustments
Rovers, K.C.; Kuper, Jan; van de Burgwal, M.D.; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria
2011-01-01
Many systems interact with their physical environment. The design of such systems needs a modelling and simulation tool which can deal with both the continuous and discrete aspects. However, most current tools are not adequately able to do so, as they implement both continuous and discrete time signals
Tripoli, Gregory J.; Smith, Eric A.
2014-06-01
A Variable-Step Topography (VST) surface coordinate system is introduced into a dynamically constrained, scalable, nonhydrostatic atmospheric model for reliable simulations of flows over both smooth and steep terrain without sacrificing dynamical integrity over either type of surface. Backgrounds of both terrain-following and step coordinate model developments are presented before justifying the turn to a VST approach within an appropriately configured host model. In this first part of a two-part sequence of papers, the full formulation of the VST model, prefaced by a description of the framework of its apposite host, i.e., a re-tooled Nonhydrostatic Modeling System (NMS), is presented. [The second part assesses the performance and benefits of the new VST coordinate system in conjunction with seven orthodox obstacle flow problems.] The NMS is a 3-dimensional, nonhydrostatic cloud-mesoscale model designed for integrations from plume-cloud scales out to regional-global scales. The derivative properties of VST, in conjunction with the NMS's newly designed dynamically constrained core, are capable of accurately capturing the deformation of flows by any type of terrain variability. Numerical differencing schemes needed to satisfy critical integral constraints, while also effectively enabling the VST lower boundary, are described. The host model constraints include mass, momentum, energy, vorticity and enstrophy conservation. A quasi-compressible closure cast on multiple-nest rotated spherical grids is the underlying framework used to study the advantages of the VST coordinate system. The principal objective behind the VST formulation is to combine the advantages of both terrain-following and step coordinate systems without suffering either of their disadvantages, while at the same time creating a vertical surface coordinate setting suitable for a scalable, nonhydrostatic model safeguarded with physically realistic dynamical constraints.
Discounting Models for Outcomes over Continuous Time
DEFF Research Database (Denmark)
Harvey, Charles M.; Østerdal, Lars Peter
Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if t...
Selective Maintenance Model Considering Time Uncertainty
Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv
2012-01-01
This study proposes a selective maintenance model for weapon system during mission interval. First, it gives relevant definitions and operational process of material support system. Then, it introduces current research on selective maintenance modeling. Finally, it establishes numerical model for selecting corrective and preventive maintenance tasks, considering time uncertainty brought by unpredictability of maintenance procedure, indetermination of downtime for spares and difference of skil...
Modelling spatiotemporal olfactory data in two steps: from binary to Hodgkin-Huxley neurones.
Quenet, Brigitte; Dubois, Rémi; Sirapian, Sevan; Dreyfus, Gérard; Horn, David
2002-01-01
Network models of synchronously updated McCulloch-Pitts neurones exhibit complex spatiotemporal patterns that are similar to activities of biological neurones in phase with a periodic local field potential, such as those observed experimentally by Wehr and Laurent (1996, Nature 384, 162-166) in the locust olfactory pathway. Modelling biological neural nets with networks of simple formal units makes the dynamics of the model analytically tractable. It is thus possible to determine the constraints that must be satisfied by its connection matrix in order to make its neurones exhibit a given sequence of activity (see, for instance, Quenet et al., 2001, Neurocomputing 38-40, 831-836). In the present paper, we address the following question: how can one construct a formal network of Hodgkin-Huxley (HH) type neurones that reproduces experimentally observed neuronal codes? A two-step strategy is suggested in the present paper: first, a simple network of binary units is designed, whose activity reproduces the binary experimental codes; second, this model is used as a guide to design a network of more realistic formal HH neurones. We show that such a strategy is indeed fruitful: it allowed us to design a model that reproduces the Wehr-Laurent olfactory codes, and to investigate the robustness of these codes to synaptic noise.
Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa
2012-01-01
RATIONALE A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon, representing a class of compounds important to the fields of Earth and planetary science. RESULTS L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. Mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.
Survey of time preference, delay discounting models
Directory of Open Access Journals (Sweden)
John R. Doyle
2013-03-01
Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity of the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
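The two simplest families in the survey, exponential and hyperbolic discounting, can be written down directly. The sketch below is a generic illustration only; the parameter names r and k follow common convention, not the paper's own notation:

```python
import math

def exponential_discount(t, r):
    """Exponential discounting: D(t) = exp(-r * t)."""
    return math.exp(-r * t)

def hyperbolic_discount(t, k):
    """Hyperbolic discounting: D(t) = 1 / (1 + k * t)."""
    return 1.0 / (1.0 + k * t)

def present_value(amount, t, discount_fn, rate):
    """Subjective present value of a reward delayed by t under a given model."""
    return amount * discount_fn(t, rate)
```

Both discount functions equal 1 at t = 0 and fall with delay; they differ in how fast discounting slows down over time, which is exactly what isolating the rate parameter per model is meant to expose.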
Directory of Open Access Journals (Sweden)
Kennedy Curtis E
2011-10-01
Full Text Available Abstract Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8
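The step of calculating time series features as latent variables can be illustrated with a minimal sketch: a sliding window over a monitored vital-sign series is summarized by its mean, spread, and trend, the kind of features that can capture deterioration. The feature set and window handling here are illustrative, not the study's actual specification:

```python
import statistics

def window_features(series, window):
    """Summarize the most recent `window` samples of a vital-sign series
    as candidate latent variables: mean, spread (population std. dev.),
    and trend (least-squares slope against sample index)."""
    recent = series[-window:]
    n = len(recent)
    mean = statistics.fmean(recent)
    spread = statistics.pstdev(recent)
    x_mean = (n - 1) / 2  # mean of indices 0..n-1
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - mean)
                for x, y in zip(range(n), recent)) / denom
    return {"mean": mean, "spread": spread, "slope": slope}
```

A rising slope on, say, a heart-rate window would then enter the prediction model as a deterioration feature alongside the raw measurements.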
Unterweger, K.
2015-01-01
© Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—here is our software of choice, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.
Jansz, Paul Vernon; Wild, Graham; Hinckley, Steven
2008-01-01
Conventional time domain Optical Coherence Tomography (OCT) relies on the detection of an interference pattern generated by the interference of backscattered light from the sample and a reference Optical Delay Line (ODL). By referencing the sample interference with the scan depth of the ODL, constructive interference indicates the depth in the sample of a reflecting structure. Conventional ODLs used in time domain OCT require some physical movement of a mirror to scan a given depth range. This movement results in instrument degradation. Also, in some situations it is necessary to have no moving parts. Stationary ODLs (SODLs) include dual Reflective Spatial Light Modulator (SLM) systems (Type I) and single Transmissive SLM with matched-arrayed-waveguide systems (Type II). In this paper, the method of fabrication and characterisation of a number of Stepped Mirrored Structures (SMS) is presented. These structures are intended for later use in proof-of-principle experiments that demonstrate the Type II SODL: a six step, 2 mm step depth macro-SMS, an eight step 150 μm deep micro-SMS with glue between steps, and a six step 150 μm deep micro-SMS with no glue between steps. These SMS are characterized in terms of their fabrication, step alignment and step height increment precision. The degree of alignment of each step was verified using half of a bulk Michelson interferometer. Step height was gauged using a pair of vernier callipers measuring each individual step. A change in notch frequency using an in-fibre Mach-Zehnder interferometer was used to gauge the average step height and the result compared to the vernier calliper results. The best aligned SMS was the micro-SMS prepared by method B with no glue between steps. It demonstrated a 95% confidence interval variation of 1% in reflected intensity, with the least variation in intensity within steps. This SMS also had the least absolute variation in step height increment: less than 8 μm. Though less variation would be ideal, for
SIRS Epidemic Model with Time Delay
Sinuhaji, Ferdinand
2016-01-01
An epidemic is an outbreak of an infectious disease in the population of a place that exceeds the normal expectation over a short period. When the disease and its causes are always present in a place, it is called endemic. This study discusses an SIRS epidemic model with time delay, a mathematical model based on the SIRS (Susceptible, Infective, Recovered, Susceptible) epidemic model. The SIRS model is used in this study with the assumption ...
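A time delay typically enters such a model when recovered individuals lose immunity a fixed time tau after recovering. The Euler-integration sketch below illustrates that common delay formulation; the parameter names and the specific delay mechanism are illustrative, not those of the cited study:

```python
def simulate_sirs_delay(beta, gamma, tau, s0, i0, r0, dt, steps):
    """Euler integration of an SIRS model in which immunity wanes after a
    fixed delay tau: individuals who recovered at time t - tau re-enter
    the susceptible class. Returns the trajectory of (S, I, R)."""
    s, i, r = s0, i0, r0
    lag = int(round(tau / dt))
    recovery_inflow = [0.0] * lag  # history of recovery rates, lag steps deep
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        # Recoveries from tau ago now lose immunity (no delay if lag == 0).
        waned = recovery_inflow.pop(0) if lag else new_recoveries
        recovery_inflow.append(new_recoveries)
        s += dt * (waned - new_infections)
        i += dt * (new_infections - new_recoveries)
        r += dt * (new_recoveries - waned)
        history.append((s, i, r))
    return history
```

Because every outflow reappears as an inflow elsewhere, the total population S + I + R is conserved exactly at each step, a useful sanity check on any discretization of the model.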
Development of a multi-step leukemogenesis model of MLL-rearranged leukemia using humanized mice.
Directory of Open Access Journals (Sweden)
Kunihiko Moriya
Full Text Available Mixed-lineage-leukemia (MLL) fusion oncogenes are intimately involved in acute leukemia and secondary therapy-related acute leukemia. To understand MLL-rearranged leukemia, several murine models for this disease have been established. However, the mouse leukemia derived from mouse hematopoietic stem cells (HSCs) may not be fully comparable with human leukemia. Here we developed a humanized mouse model for human leukemia by transplanting human cord blood-derived HSCs transduced with an MLL-AF10 oncogene into a supra-immunodeficient mouse strain, NOD/Shi-scid, IL-2Rγ(-/-) (NOG) mice. Injection of the MLL-AF10-transduced HSCs into the liver of NOG mice enhanced multilineage hematopoiesis, but did not induce leukemia. Because active mutations in ras genes are often found in MLL-related leukemia, we next transduced the gene for a constitutively active form of K-ras along with the MLL-AF10 oncogene. Eight weeks after transplantation, all the recipient mice had developed acute monoblastic leukemia (the M5 phenotype in the French-American-British classification). We thus successfully established a human MLL-rearranged leukemia that was derived in vivo from human HSCs. In addition, since the enforced expression of the mutant K-ras alone was insufficient to induce leukemia, the present model may also be a useful experimental platform for the multi-step leukemogenesis model of human leukemia.
Specification of a STEP Based Reference Model for Exchange of Robotics Models
DEFF Research Database (Denmark)
Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold
ESPRIT Project 6457: "Interoperability of Standards for Robotics in CIME" (InterRob) belongs to the Subprogram "Computer Integrated Manufacturing and Engineering" of ESPRIT, the European Specific Programme for Research and Development in Information Technology supported by the European Commission. … InterRob aims to develop an integrated solution to precision manufacturing by combining product data and database technologies with robotic off-line programming and simulation. Benefits arise from the use of high level simulation tools and developing standards for the exchange of product model data … combining geometric, dynamic, process and robot specific data. The growing need for accurate information about manufacturing data (models of robots and other mechanisms) in diverse industrial applications has initiated ESPRIT Project 6457: InterRob. Besides the topics associated with standards for industrial …
Maricelli, Joseph W; Lu, Qi L; Lin, David C; Rodgers, Buel D
2016-01-01
Limb-girdle muscular dystrophy type 2I (LGMD2I) affects thousands of lives with shortened life expectancy mainly due to cardiac and respiratory problems and difficulty with ambulation significantly compromising quality of life. Limited studies have noted impaired gait in patients and animal models of different muscular dystrophies, but not in animal models of LGMD2I. Our goal, therefore, was to quantify gait metrics in the fukutin-related protein P448L mutant (P448L) mouse, a recently developed model for LGMD2I. The Noldus CatWalk XT motion capture system was used to identify multiple gait impairments. An average galloping body speed of 35 cm/s for both P448L and C57BL/6 wild-type mice was maintained to ensure differences in gait were due only to strain physiology. Compared to wild-type mice, P448L mice reach maximum contact 10% faster and have 40% more paw surface area during stance. Additionally, force intensity at the time of maximum paw contact is roughly 2-fold higher in P448L mice. Paw swing time is reduced in P448L mice without changes in stride length as a faster swing speed compensates. Gait instability in P448L mice is indicated by 50% higher instances of 3 and 4 paw stance support and, conversely, 2-fold fewer instances of single paw stance support and no instance of zero paw support. This leads to lower variation of normal step patterns used and a higher use of uncommon step patterns. Similar anomalies have also been noted in muscular dystrophy patients due to weakness in the hip abductor muscles, producing a Trendelenburg gait characterized by "waddling" and more pronounced shifts to the stance leg. Thus, gait of P448L mice replicates anomalies commonly seen in LGMD2I patients, which is not only potentially valuable for assessing drug efficacy in restoring movement biomechanics, but also for better understanding them.
The first step of evidence based model: formulation of answerable clinical questions
Directory of Open Access Journals (Sweden)
Mario Delgado-Noguera
2010-12-01
Full Text Available This article seeks to help health professionals understand the importance and usefulness of Evidence-Based Medicine (EBM) as a method for making clinical decisions in the practice of medicine. This article focuses on the first step of EBM's method: how does a structured clinical question facilitate access to biomedical literature databases such as PubMed and the Cochrane Library? The use of structured questions is useful to save time and helps in retrieving relevant references from the scientific literature to answer a question of intervention or treatment. The structured question consists of four components: Patient, Intervention, Comparison, and Outcomes. The structured question's terms, and the combinations thereof, are among the elements of the search strategy, which is also useful for elaborating the state of the art or a theoretical framework for a research project.
A Three Step B2B Sales Model Based on Satisfaction Judgments
DEFF Research Database (Denmark)
Grünbaum, Niels Nolsøe
2015-01-01
This paper aims to provide a coherent, detailed and integrative understanding of the mental processes (i.e. dimensions) that industrial buyers apply when forming satisfaction judgments in new-task buying situations. A qualitative inductive research strategy is utilized in this study … companies' perspective. The buying center members applied satisfaction dimensions when forming satisfaction judgments. Moreover, the focus and importance of the identified satisfaction dimensions fluctuated depending on the phase of the buying process. Based on the findings a three-step sales model is proposed … The insights produced can be applied by selling companies to craft close collaborative customer relationships in a systematic and efficient way. The process of building customer relationships will be guided through actions that yield higher satisfaction judgments, leading to loyal customers and finally …
Competitive inhibition reaction mechanisms for the two-step model of protein aggregation.
Whidden, Mark; Ho, Allison; Ivanova, Magdalena I; Schnell, Santiago
2014-01-01
We propose three new reaction mechanisms for competitive inhibition of protein aggregation in the two-step model of protein aggregation. The first mechanism is characterized by the inhibition of native protein, the second by the inhibition of aggregation-prone protein, and the third by the mixed inhibition of native and aggregation-prone proteins. Rate equations are derived for these mechanisms, and a method is described for plotting kinetic results to distinguish these three types of inhibitors. The derived rate equations provide a simple way of estimating the inhibition constant of native or aggregation-prone protein inhibitors in protein aggregation. The new approach is used to estimate the inhibition constants of different peptide inhibitors of insulin aggregation.
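The general shape of the first mechanism, competitive inhibition of the native protein, can be sketched as instantaneous rates for a schematic two-step scheme (native N converts to an aggregation-prone form A, which aggregates). The rate laws, names, and the 1/(1 + [I]/Ki) competitive factor below are a generic textbook form, not the specific equations derived in the paper:

```python
def two_step_aggregation_rates(n, a, inhibitor, k1, k2, ki):
    """Instantaneous rates for a schematic two-step aggregation model with
    competitive inhibition of the native protein:
      N -> A (rate k1, slowed by inhibitor binding to N)
      A + A -> aggregate (rate k2, second order in A)
    Returns (dN/dt, dA/dt, aggregation rate)."""
    free_fraction = 1.0 / (1.0 + inhibitor / ki)  # classic competitive factor
    conversion = k1 * n * free_fraction           # N -> A, inhibited
    aggregation = k2 * a * a                      # A consumed pairwise
    dn = -conversion
    da = conversion - 2.0 * aggregation
    return dn, da, aggregation
```

Plotting kinetic traces generated from such rate functions at several inhibitor concentrations is what lets the three inhibitor types be told apart, since each mechanism places the competitive factor on a different species.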
Delivery Time Reliability Model of Logistics Network
Directory of Open Access Journals (Sweden)
Liusan Wu
2013-01-01
Full Text Available Natural disasters like earthquakes and floods will surely destroy the existing traffic network, usually accompanied by delivery delay or even network collapse. A logistics-network delivery time reliability model defined by a shortest-time entropy is proposed as a means to estimate the actual delivery time reliability. The smaller the entropy, the stronger the delivery time reliability, and vice versa. The shortest delivery time is computed separately based on two different assumptions. If a path is considered without capacity restriction, the shortest delivery time is positively related to the length of the shortest path; if a path is considered with capacity restriction, a minimax programming model is built to compute the shortest delivery time. Finally, an example is utilized to confirm the validity and practicality of the proposed approach.
Building Chaotic Model From Incomplete Time Series
Siek, Michael; Solomatine, Dimitri
2010-05-01
This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series, and the prediction is made by a global model or adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, the completeness of the data availability cannot always be guaranteed, since the measurement or data transmission is intermittently not working properly for various reasons. We propose two main solutions dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the duration of 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values generated by applying a random masking process to the original (complete) time series is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
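The phase-space reconstruction step is the standard time-delay (Takens) embedding: each reconstructed point stacks the current observation with lagged copies of itself. A minimal sketch (generic embedding dimension and lag, not the values tuned for the surge series):

```python
def delay_embed(series, dim, lag):
    """Reconstruct the time-delayed phase space: each point is
    (x[t], x[t-lag], ..., x[t-(dim-1)*lag]). In the non-imputing variant,
    points whose delayed coordinates fall on missing values would simply
    be skipped, yielding non-continuous trajectories."""
    span = (dim - 1) * lag
    return [tuple(series[t - j * lag] for j in range(dim))
            for t in range(span, len(series))]
```

Local-model prediction then finds the nearest neighbors of the current point among these reconstructed points and extrapolates from how those neighbors evolved.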
Reiter, A; Hake, J; Meissner, C; Rohwer, J; Friedrich, H J; Oehmichen, M
2001-06-15
The elimination time of illicit drugs and their metabolites is of both clinical and forensic interest. In order to determine the elimination time for various drugs and their metabolites we recruited 52 volunteers in a protected, low-step detoxification program. Blood samples were taken from each volunteer daily for the first 7 days, and urine samples daily for the first 3 weeks. Urine was analyzed using a fluorescence-polarization immunoassay (FPIA) and gas chromatography/mass spectrometry (GC/MS), serum using GC/MS. The elimination times of the drugs and/or their metabolites in urine and serum as well as the tolerance intervals/confidence intervals were determined. Due to the sometimes extremely high initial concentrations and low cut-off values, a few of the volunteers had markedly longer elimination times than those described in the literature. The cut-off values were as follows: barbiturates II (200 ng/ml), cannabinoids (20 ng/ml), cocaine metabolites (300 ng/ml), opiates (200 ng/ml). GC/MS detected the following maximum elimination times: total morphine in urine up to 270.3 h, total morphine and free morphine in serum up to 121.3 h, monoacetylmorphine in urine up to 34.5 h, 11-nor-9-carboxy-delta-9-tetrahydrocannabinol (THC-COOH) in urine up to 433.5 h, THC-COOH in serum up to 74.3 h, total codeine in urine up to 123 h, free codeine in urine up to 97.5 h, total codeine in serum up to 29 h, free codeine in serum up to 6.3 h, total dihydrocodeine (DHC) in urine up to 314.8 h, free DHC in urine up to 273.3 h, total and free DHC in serum up to 50.1 h. Cocaine and its metabolites were largely undetectable in the present study.
Leandro, J.; Schumann, A.; Pfister, A.
2016-04-01
Some of the major challenges in modelling rainfall-runoff in urbanised areas are the complex interaction between the sewer system and the overland surface, and the spatial heterogeneity of the urban key features. The former requires the sewer network and the system of surface flow paths to be solved simultaneously. The latter is still an unresolved issue because the heterogeneity of runoff formation requires highly detailed information and includes a large variety of feature-specific rainfall-runoff dynamics. This paper discloses a methodology for considering the variability of building types and the spatial heterogeneity of land surfaces. The former is achieved by developing a specific conceptual rainfall-runoff model and the latter by defining a fully distributed approach for infiltration processes in urban areas with limited storage capacity dependent on OpenStreetMap (OSM) data. The model complexity is increased stepwise by adding components to an existing 2D overland flow model. The different steps are defined as modelling levels. The methodology is applied in a German case study. Results highlight that: (a) spatial heterogeneity of urban features has a medium to high impact on the estimated overland flood-depths, (b) the addition of multiple urban features has a higher cumulative effect due to the dynamic effects simulated by the model, (c) connecting the runoff from buildings to the sewer contributes to the non-linear effects observed on the overland flood-depths, and (d) OSM data is useful in identifying ponding areas (for which infiltration plays a decisive role) and permeable natural surface flow paths (which delay the flood propagation).
Multi-model Cross Pollination in Time
Du, Hailiang
2016-01-01
Predictive skill of complex models is often not uniform in model-state space; in weather forecasting models, for example, the skill of the model can be greater in populated regions of interest than in "remote" regions of the globe. Given a collection of models, a multi-model forecast system using the cross pollination in time approach can be generalised to take advantage of instances where some models produce systematically more accurate forecast of some components of the model-state. This generalisation is stated and then successfully demonstrated in a moderate ~40 dimensional nonlinear dynamical system suggested by Lorenz. In this demonstration four imperfect models, each with similar global forecast skill, are used. Future applications in weather forecasting and in economic forecasting are discussed. The demonstration establishes that cross pollinating forecast trajectories to enrich the collection of simulations upon which the forecast is built can yield a new forecast system with significantly more skill...
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
DEFF Research Database (Denmark)
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite …
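The first step, extracting principal-component factor estimates from the panel, can be sketched in a few lines. This is a standard PCA sketch under the usual T x N panel convention; normalization choices vary across the factor-model literature, and the second (VAR) step is not shown:

```python
import numpy as np

def extract_factors(X, num_factors):
    """Step 1 of a two-step procedure: principal-component factor
    estimates from a T x N data panel X. Returns factors (T x k)
    and loadings (N x k) from the leading eigenvectors of the
    sample covariance matrix."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)                 # demean each series
    cov = Xc.T @ Xc / T                     # N x N sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:num_factors]
    loadings = eigvecs[:, order]
    factors = Xc @ loadings
    return factors, loadings
```

With the factors in hand, the time-varying loadings would then be fitted series by series, which is what makes the second step a set of univariate regressions.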
Gharamti, M. E.
2015-05-11
The ensemble Kalman filter (EnKF) is a popular method for state-parameter estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
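The analysis (update) step that all of these variants share can be sketched as a textbook stochastic EnKF update with perturbed observations. This is the generic building block only, not the paper's one-step-ahead smoothing scheme; the linear observation operator and scalar observation variance are simplifying assumptions:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic EnKF analysis step for an m x n ensemble
    (state dimension m, n members) with perturbed observations.
    obs_operator is a p x m linear map H; obs_var is the scalar
    observation-error variance."""
    m, n = ensemble.shape
    H = obs_operator
    p = H.shape[0]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ anomalies
    # Ensemble form of the Kalman gain K = P H^T (H P H^T + R)^(-1).
    PHt = anomalies @ HA.T / (n - 1)
    S = HA @ HA.T / (n - 1) + obs_var * np.eye(p)
    K = PHt @ np.linalg.inv(S)
    # Each member assimilates its own perturbed copy of the observation.
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), size=(p, n))
    return ensemble + K @ (perturbed - H @ ensemble)
```

In a Joint-EnKF the state vector here would be augmented with the parameters (e.g., log-conductivities), so a single such update moves both toward the data; the proposed scheme instead separates the state and parameter updates.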
Alcoholics Anonymous and twelve-step recovery: a model based on social and cognitive neuroscience.
Galanter, Marc
2014-01-01
In the course of achieving abstinence from alcohol, longstanding members of Alcoholics Anonymous (AA) typically experience a change in their addiction-related attitudes and behaviors. These changes are reflective of physiologically grounded mechanisms which can be investigated within the disciplines of social and cognitive neuroscience. This article is designed to examine recent findings associated with these disciplines that may shed light on the mechanisms underlying this change. Literature review and hypothesis development. Pertinent aspects of the neural impact of drugs of abuse are summarized. After this, research regarding specific brain sites, elucidated primarily by imaging techniques, is reviewed relative to the following: Mirroring and mentalizing are described in relation to experimentally modeled studies on empathy and mutuality, which may parallel the experiences of social interaction and influence on AA members. Integration and retrieval of memories acquired in a setting like AA are described, and are related to studies on storytelling, models of self-schema development, and value formation. A model for ascription to a Higher Power is presented. The phenomena associated with AA reflect greater complexity than the empirical studies on which this article is based, and certainly require further elucidation. Despite this substantial limitation in currently available findings, there is heuristic value in considering the relationship between the brain-based and clinical phenomena described here. There are opportunities for the study of neuroscientific correlates of Twelve-Step-based recovery, and these can potentially enhance our understanding of related clinical phenomena. © American Academy of Addiction Psychiatry.
Kainerstorfer, Jana M.; Sassaroli, Angelo; Hallacoglu, Bertan; Pierro, Michele L.; Fantini, Sergio
2015-01-01
Rationale and Objectives Perturbations in cerebral blood volume (CBV), blood flow (CBF), and metabolic rate of oxygen (CMRO2) lead to associated changes in tissue concentrations of oxy- and deoxy-hemoglobin (ΔO and ΔD), which can be measured by near-infrared spectroscopy (NIRS). A novel hemodynamic model has been introduced to relate physiological perturbations and measured quantities. We seek to use this model to determine functional traces of cbv(t) and cbf(t) − cmro2(t) from time-varying NIRS data, and cerebrovascular physiological parameters from oscillatory NIRS data (lowercase letters denote the relative changes in CBV, CBF, and CMRO2 with respect to baseline). Such a practical implementation of a quantitative hemodynamic model is an important step toward the clinical translation of NIRS. Materials and Methods In the time domain, we have simulated O(t) and D(t) traces induced by cerebral activation. In the frequency domain, we have performed a new analysis of frequency-resolved measurements of cerebral hemodynamic oscillations during a paced breathing paradigm. Results We have demonstrated that cbv(t) and cbf(t) − cmro2(t) can be reliably obtained from O(t) and D(t) using the model, and that the functional NIRS signals are delayed with respect to cbf(t) − cmro2(t) as a result of the blood transit time in the microvasculature. In the frequency domain, we have identified physiological parameters (e.g., blood transit time, cutoff frequency of autoregulation) that can be measured by frequency-resolved measurements of hemodynamic oscillations. Conclusions The ability to perform noninvasive measurements of cerebrovascular parameters has far-reaching clinical implications. Functional brain studies rely on measurements of CBV, CBF, and CMRO2, whereas the diagnosis and assessment of neurovascular disorders, traumatic brain injury, and stroke would benefit from measurements of local cerebral hemodynamics and autoregulation. PMID:24439332
Horton, Pascal; Obled, Charles; Jaboyedoff, Michel
2017-07-01
Analogue methods (AMs) predict local weather variables (predictands) such as precipitation by means of a statistical relationship with predictors at a synoptic scale. The analogy is generally assessed on gradients of geopotential heights first to sample days with a similar atmospheric circulation. Other predictors such as moisture variables can also be added in a successive level of analogy. The search for candidate situations similar to a given target day is usually undertaken by comparing the state of the atmosphere at fixed hours of the day for both the target day and the candidate analogues. This is a consequence of using standard daily precipitation time series, which are available over longer periods than sub-daily data. However, it is unlikely for the best analogy to occur at the exact same hour for the target and candidate situations. A better analogue situation may be found with a time shift of several hours since a better fit can occur at different times of the day. In order to assess the potential for finding better analogues at a different hour, a moving time window (MTW) has been introduced. The MTW resulted in a better analogy in terms of the atmospheric circulation and showed improved values of the analogy criterion on the entire distribution of the extracted analogue dates. The improvement was found to increase with the analogue rank due to an accumulation of better analogues in the selection. A seasonal effect has also been identified, with larger improvements shown in winter than in summer. This may be attributed to stronger diurnal cycles in summer that favour predictors taken at the same hour for the target and analogue days. The impact of the MTW on the precipitation prediction skill has been assessed by means of a sub-daily precipitation series transformed into moving 24 h totals at 12, 6, and 3 h time steps. The prediction skill was improved by the MTW, as was the reliability of the prediction. Moreover, the improvements were greater for days
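The moving time window idea, scoring a candidate day at several shifted hours and keeping the best match, can be sketched as follows. This is an illustrative toy, not the operational AM implementation; the RMSE criterion, field sizes, and shift set are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def circulation_rmse(a, b):
    """Analogy criterion: RMSE between two (lat, lon) geopotential fields."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def best_analogy_with_mtw(target, candidate_series, shifts=(-12, -6, 0, 6, 12)):
    """Score one candidate day against the target field, allowing the
    comparison hour to shift by a few hours (moving time window, MTW).

    candidate_series: dict mapping hour-shift -> (lat, lon) field
    Returns (best_score, best_shift); lower score = better analogue.
    """
    scored = [(circulation_rmse(target, candidate_series[s]), s) for s in shifts]
    return min(scored)

# Synthetic example: the candidate day matches the target best at a +6 h shift,
# which a fixed-hour comparison (shift 0 only) would miss.
target = rng.standard_normal((5, 5))
series = {s: target + 0.5 * rng.standard_normal((5, 5)) for s in (-12, -6, 0, 6, 12)}
series[6] = target + 0.01 * rng.standard_normal((5, 5))   # near-perfect at +6 h
score, shift = best_analogy_with_mtw(target, series)
```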
Modeling noisy time series: Physiological tremor
Timmer, J
1998-01-01
Empirical time series often contain observational noise. We investigate the effect of this noise on the estimated parameters of models fitted to the data. For data of physiological tremor, i.e. a small-amplitude oscillation of the outstretched hand of healthy subjects, we compare the results for a linear model that explicitly includes additional observational noise to one that ignores this noise. We discuss problems and possible solutions for nonlinear deterministic as well as nonlinear stochastic processes. In particular, we discuss the state space model, applicable to modeling noisy stochastic systems, and Bock's algorithm, capable of modeling noisy deterministic systems.
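The attenuating effect of observational noise on fitted model parameters is easy to demonstrate on an AR(1) process: ignoring the noise biases the least-squares coefficient toward zero. A minimal sketch (the coefficient, noise level, and series length are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) process x_t = a*x_{t-1} + e_t, observed with additive noise.
a_true, n = 0.9, 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.standard_normal()
y = x + 1.0 * rng.standard_normal(n)   # observational noise, sd = 1

def ar1_ols(series):
    """Naive AR(1) coefficient estimate (least squares on lagged values),
    which implicitly assumes a noise-free observation of the process."""
    return float(series[:-1] @ series[1:] / (series[:-1] @ series[:-1]))

a_clean = ar1_ols(x)   # close to the true value 0.9
a_noisy = ar1_ols(y)   # attenuated toward zero by the observational noise
```

A state space model with a Kalman-filter likelihood would treat x as latent and recover an (approximately) unbiased estimate; the sketch only exposes the bias the abstract warns about.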
Modeling discrete time-to-event data
Tutz, Gerhard
2016-01-01
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...
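The life-table quantities the book builds on, the discrete hazard h(t) = P(T = t | T >= t) and the survival function S(t) = prod_{k<=t}(1 - h(k)), can be sketched in a few lines. The tiny data set below is invented for illustration:

```python
import numpy as np

def life_table_hazard(times, events, max_t):
    """Discrete hazard h(t) = P(T = t | T >= t) estimated from a life table.

    times:  integer failure/censoring times (e.g., quarters)
    events: 1 = observed failure, 0 = censored
    """
    hazards = []
    at_risk = len(times)
    for t in range(1, max_t + 1):
        d = int(np.sum((times == t) & (events == 1)))   # failures at t
        hazards.append(d / at_risk if at_risk else 0.0)
        at_risk -= int(np.sum(times == t))              # failures and censorings leave
    return np.array(hazards)

def survival_from_hazard(h):
    """S(t) = prod_{k<=t} (1 - h(k)) for discrete failure times."""
    return np.cumprod(1 - h)

# Tiny invented example: 5 subjects, times in quarters, 1 = event, 0 = censored
times = np.array([1, 2, 2, 3, 3])
events = np.array([1, 1, 0, 1, 1])
h = life_table_hazard(times, events, 3)
S = survival_from_hazard(h)
```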
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly...
Forecasting with periodic autoregressive time series models
Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
1999-01-01
textabstractThis paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption
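A periodic AR model lets the autoregressive coefficient vary with the season of the observation. A minimal noise-free point-forecast sketch with invented quarter-specific coefficients (unit roots and deterministic terms, central to the paper, are omitted here):

```python
import numpy as np

def par1_forecast(last_value, last_quarter, coeffs, horizon):
    """Iterated point forecasts from a periodic AR(1): y_t = phi_{q(t)} * y_{t-1},
    where the coefficient phi depends on the quarter q(t) of the forecast period.

    coeffs: length-4 array, phi for quarters 0..3 (illustrative values)
    """
    forecasts, y, q = [], last_value, last_quarter
    for _ in range(horizon):
        q = (q + 1) % 4            # advance the season
        y = coeffs[q] * y          # quarter-specific dynamics
        forecasts.append(y)
    return np.array(forecasts)

phi = np.array([1.1, 0.9, 1.05, 0.95])   # invented quarter-specific coefficients
f = par1_forecast(100.0, last_quarter=3, coeffs=phi, horizon=4)
```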
Sasaki, Akira; Kato, Susumu; Takahashii, Eiichi; Kishimoto, Yasuaki; Fujii, Takashi; Kanazawa, Seiji
2016-02-01
We show a cell simulation of a discharge in an insulating gas from the initial partial discharge to leader inception until breakdown, based on the percolation model. In the model, we consider that the propagation of the leader occurs when connections between randomly produced ionized regions in the discharge medium are established. To determine the distribution of ionized regions, the state of each simulation cell is decided by evaluating the probability of ionization in SF6, which depends on the local electric field. The electric field as well as the discharge current are calculated by solving circuit equations for the network of simulation cells. Both calculations are coupled to each other and the temporal evolution of discharge is self-consistently calculated. The model dependence of the features of the discharge is investigated. It is found that taking the suppression of attachment in the presence of a discharge current into account, the calculation reproduces the behavior of experimental discharges. It is shown that for a strong electric field, the inception of a stepped leader causes immediate breakdown. For an electric field of 30-50% of the critical field, the initial partial discharge persists for a stochastic time lag and then the propagation of a leader takes place. As the strength of the electric field decreases, the time lag increases rapidly and eventually only a partial discharge with a short arrested leader occurs, as observed in experiments.
Rödig, C; Chizhov, I; Weidlich, O; Siebert, F
1999-01-01
In this report, from time-resolved step-scan Fourier transform infrared investigations from 15 ns to 160 ms, we provide evidence for the successive rise of three different M states that differ in their structures. The first state rises with approximately 3 microseconds to only a small percentage. Its structure as judged from amide I/II bands differs in small but well-defined aspects from the L state. The next M state, which appears in approximately 40 microseconds, has almost all of the characteristics of the "late" M state, i.e., it differs considerably from the first one. Here, the L ⇌ M equilibrium is shifted toward M, although some percentage of L still persists. In the last M state (rise time approximately 130 microseconds), the equilibrium is shifted toward full deprotonation of the Schiff base, and only small additional structural changes take place. In addition to these results obtained for unbuffered conditions or at pH 7, experiments performed at lower and higher pH are presented. These results are discussed in terms of the molecular changes postulated to occur in the M intermediate to allow the shift of the L/M equilibrium toward M and possibly to regulate the change of the accessibility of the Schiff base necessary for effective proton pumping. PMID:10233083
Flexible boosting of accelerated failure time models
Directory of Open Access Journals (Sweden)
Hothorn Torsten
2008-06-01
Full Text Available Abstract Background When boosting algorithms are used for building survival models from high-dimensional data, it is common to fit a Cox proportional hazards model or to use least squares techniques for fitting semiparametric accelerated failure time models. There are cases, however, where fitting a fully parametric accelerated failure time model is a good alternative to these methods, especially when the proportional hazards assumption is not justified. Boosting algorithms for the estimation of parametric accelerated failure time models have not been developed so far, since these models require the estimation of a model-specific scale parameter which traditional boosting algorithms are not able to deal with. Results We introduce a new boosting algorithm for censored time-to-event data which is suitable for fitting parametric accelerated failure time models. Estimation of the predictor function is carried out simultaneously with the estimation of the scale parameter, so that the negative log likelihood of the survival distribution can be used as a loss function for the boosting algorithm. The estimation of the scale parameter does not affect the favorable properties of boosting with respect to variable selection. Conclusion The analysis of a high-dimensional set of microarray data demonstrates that the new algorithm is able to outperform boosting with the Cox partial likelihood when the proportional hazards assumption is questionable. In low-dimensional settings, i.e., when classical likelihood estimation of a parametric accelerated failure time model is possible, simulations show that the new boosting algorithm closely approximates the estimates obtained from the maximum likelihood method.
Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc
2012-12-01
In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
Discrete-time modelling of musical instruments
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
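The digital waveguide string mentioned in the article can be sketched as two delay lines carrying right- and left-going traveling waves, with lossy, inverting reflections at the terminations. This is a bare-bones illustration (delay length, loss factor, and excitation are arbitrary choices), not the article's model:

```python
import numpy as np

def waveguide_string(n_samples, delay=50, loss=0.995):
    """Ideal string as two delay lines (right- and left-going waves) with
    inverting, slightly lossy reflections at both ends. The fundamental
    period is 2 * delay samples."""
    rng = np.random.default_rng(3)
    right = rng.standard_normal(delay)   # initial "pluck" excitation
    left = np.zeros(delay)
    out = np.empty(n_samples)
    for n in range(n_samples):
        out[n] = right[-1] + left[0]     # observe displacement near one end
        r_end = right[-1]                # wave arriving at the bridge
        l_end = left[0]                  # wave arriving at the nut
        # Shift both delay lines and inject the reflected, attenuated waves
        right = np.concatenate(([-loss * l_end], right[:-1]))
        left = np.concatenate((left[1:], [-loss * r_end]))
    return out

y = waveguide_string(2000)
```

Because the reflections are pure scaled inversions, the output repeats every 2 * delay samples scaled by loss squared, i.e. a decaying periodic tone; real instrument models add frequency-dependent loss filters at the terminations.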
McKenzie-Veal, Dillon
2012-01-01
This research was conducted to provide deeper insight into the ability of STEP AP 203 Edition 2, JT, and 3D PDF to translate and preserve information while using a benchmark model. The benchmark model was designed based on four industry models and created natively in the five industry-leading 3D CAD programs. The native CAD program models were translated using STEP, JT, and 3D PDF. Several criteria were analyzed along the paths of translation from one disparate CAD program to another. Along wi...
Alternative time representation in dopamine models.
Rivest, François; Kalaska, John F; Bengio, Yoshua
2010-02-01
Dopaminergic neuron activity has been modeled during learning and appetitive behavior, most commonly using the temporal-difference (TD) algorithm. However, a proper representation of elapsed time and of the exact task is usually required for the model to work. Most models use timing elements such as delay-line representations of time that are not biologically realistic for intervals in the range of seconds. The interval-timing literature provides several alternatives. One of them is that timing could emerge from general network dynamics, instead of coming from a dedicated circuit. Here, we present a general rate-based learning model based on long short-term memory (LSTM) networks that learns a time representation when needed. Using a naïve network learning its environment in conjunction with TD, we reproduce dopamine activity in appetitive trace conditioning with a constant CS-US interval, including probe trials with unexpected delays. The proposed model learns a representation of the environment dynamics in an adaptive biologically plausible framework, without recourse to delay lines or other special-purpose circuits. Instead, the model predicts that the task-dependent representation of time is learned by experience, is encoded in ramp-like changes in single-neuron activity distributed across small neural networks, and reflects a temporal integration mechanism resulting from the inherent dynamics of recurrent loops within the network. The model also reproduces the known finding that trace conditioning is more difficult than delay conditioning and that the learned representation of the task can be highly dependent on the types of trials experienced during training. Finally, it suggests that the phasic dopaminergic signal could facilitate learning in the cortex.
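The contrast with conventional TD models can be made concrete with the standard serial-compound ("delay line") time representation that the article argues against: a tabular TD(0) learner whose prediction error mimics phasic dopamine, shrinking at reward time and surviving at stimulus onset after training. All task constants below are illustrative:

```python
import numpy as np

def td_trace_conditioning(n_trials=200, n_steps=10, cs_step=2, us_step=8,
                          alpha=0.1, gamma=0.98):
    """Tabular TD(0) with a complete-serial-compound representation:
    one value weight per time step after CS onset (active while the CS-
    triggered timing signal is on). Returns weights and per-step TD errors."""
    w = np.zeros(n_steps)
    deltas = np.zeros((n_trials, n_steps))
    for trial in range(n_trials):
        for t in range(n_steps - 1):
            r = 1.0 if t == us_step else 0.0          # reward (US) at us_step
            x_t = 1.0 if t >= cs_step else 0.0        # CS-triggered timing signal
            v_t = w[t] * x_t
            v_next = w[t + 1] * (1.0 if t + 1 >= cs_step else 0.0)
            delta = r + gamma * v_next - v_t          # prediction error ("dopamine")
            deltas[trial, t] = delta
            if x_t:                                   # only active states learn
                w[t] += alpha * delta
    return w, deltas

w, deltas = td_trace_conditioning()
```

Early in training the error peaks at reward delivery; after training it has moved to the (unpredicted) stimulus onset, the classic dopamine signature. The LSTM model in the abstract replaces the hand-built delay line here with a learned representation.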
Applying the manifold theory to Milky Way models : First steps on morphology and kinematics
Romero-Gomez, M.; Athanassoula, E.; Antoja Castelltort, Teresa; Figueras, F.; Reylé, C.; Robin, A.; Schultheis, M.
We present recent results obtained by applying invariant manifold techniques to analytical models of the Milky Way. It has been shown that invariant manifolds can reproduce successfully the spiral arms and rings in external barred galaxies. Here, for the first time, we apply this theory to Milky Way models.
Directory of Open Access Journals (Sweden)
Jian Xu
2014-08-01
Full Text Available STEP (STriatal-Enriched protein tyrosine Phosphatase) is a neuron-specific phosphatase that regulates N-methyl-D-aspartate receptor (NMDAR) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) trafficking, as well as ERK1/2, p38, Fyn, and Pyk2 activity. STEP is overactive in several neuropsychiatric and neurodegenerative disorders, including Alzheimer's disease (AD). The increase in STEP activity likely disrupts synaptic function and contributes to the cognitive deficits in AD. AD mice lacking STEP have restored levels of glutamate receptors on synaptosomal membranes and improved cognitive function, results that suggest STEP as a novel therapeutic target for AD. Here we describe the first large-scale effort to identify and characterize small-molecule STEP inhibitors. We identified the benzopentathiepin 8-(trifluoromethyl)-1,2,3,4,5-benzopentathiepin-6-amine hydrochloride (known as TC-2153) as an inhibitor of STEP with an IC50 of 24.6 nM. TC-2153 represents a novel class of PTP inhibitors based upon a cyclic polysulfide pharmacophore that forms a reversible covalent bond with the catalytic cysteine in STEP. In cell-based secondary assays, TC-2153 increased tyrosine phosphorylation of STEP substrates ERK1/2, Pyk2, and GluN2B, and exhibited no toxicity in cortical cultures. Validation and specificity experiments performed in wild-type (WT) and STEP knockout (KO) cortical cells and in vivo in WT and STEP KO mice suggest specificity of inhibitors towards STEP compared to highly homologous tyrosine phosphatases. Furthermore, TC-2153 improved cognitive function in several cognitive tasks in 6- and 12-mo-old triple transgenic AD (3xTg-AD) mice, with no change in beta amyloid and phospho-tau levels.
DEFF Research Database (Denmark)
Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten
2011-01-01
A very fine temporal and volumetric resolution precipitation time series is modeled using Markov models. Both 1st and 2nd order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood based techniques. The 2nd order Markov model is found to be insignif...
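A 1st-order two-state (dry/wet) Markov chain with exponentially distributed wet-step depths is the simplest member of the model family investigated; the seasonal/diurnal variation and 2nd-order dependence studied in the paper are omitted. The transition probabilities and mean intensity below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_precip(n_steps, p_wd=0.1, p_ww=0.6, mean_intensity=0.8):
    """1st-order two-state (dry/wet) Markov chain at a fine time step.

    p_wd: P(wet | previous step dry); p_ww: P(wet | previous step wet).
    Wet-step depths are drawn from an exponential distribution (illustrative).
    """
    wet = np.zeros(n_steps, dtype=bool)
    for t in range(1, n_steps):
        p = p_ww if wet[t - 1] else p_wd
        wet[t] = rng.random() < p
    depth = np.where(wet, rng.exponential(mean_intensity, n_steps), 0.0)
    return wet, depth

wet, depth = simulate_precip(100_000)
# Stationary wet fraction: p_wd / (p_wd + 1 - p_ww) = 0.1 / 0.5 = 0.2
```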
Saturation and time dependence of geodynamo models
Schrinner, M; Cameron, R; Hoyng, P
2009-01-01
In this study we address the question under which conditions a saturated velocity field stemming from geodynamo simulations leads to an exponential growth of the magnetic field in a corresponding kinematic calculation. We perform global self-consistent geodynamo simulations and calculate the evolution of a kinematically advanced tracer field. The self-consistent velocity field enters the induction equation in each time step, but the tracer field does not contribute to the Lorentz force. This experiment has been performed by Cattaneo & Tobias (2009) and is closely related to the test field method by Schrinner et al. (2005, 2007). We find two dynamo regimes in which the tracer field either grows exponentially or approaches a state aligned with the actual self-consistent magnetic field after an initial transition period. Both regimes can be distinguished by the Rossby number and coincide with the dipolar and multipolar dynamo regimes identified by Christensen & Aubert (2006). Dipolar dynamos with low Ros...
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper
2015-05-01
This work reports a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. Also, the variation of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0% for both low- and high-density conditions is reproduced.
A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.
Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M
2014-01-01
Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified; their clinical significance, however, remains unknown. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, the USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel, straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus; subsequently, only the positive samples undergo real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, making it suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Daniel Junker
2012-01-01
Full Text Available Objectives. To evaluate prostate cancer (PCa) detection rates of real-time elastography (RTE) as a function of tumor size, tumor volume, localization and histological type. Materials and Methods. Thirty-nine patients with biopsy-proven PCa underwent RTE before radical prostatectomy (RPE) to assess prostate tissue elasticity, and hard lesions were considered suspicious for PCa. After RPE, the prostates were prepared as whole-mount step sections and were compared with imaging findings for analyzing PCa detection rates. Results. RTE detected 6/62 cancer lesions with a maximum diameter of 0–5 mm (9.7%), 10/37 with a maximum diameter of 6–10 mm (27%), 24/34 with a maximum diameter of 11–20 mm (70.6%), 14/14 with a maximum diameter of >20 mm (100%) and 40/48 with a volume ≥ 0.2 cm³ (83.3%). Regarding cancer lesions with a volume ≥ 0.2 cm³, there was a significant difference in PCa detection rates between Gleason scores with predominant Gleason pattern 3 compared to those with predominant Gleason pattern 4 or 5 (75% versus 100%; P=0.028). Conclusions. RTE is able to detect PCa of significant tumor volume and of predominant Gleason pattern 4 or 5 with high confidence, but is of limited value in the detection of small cancer lesions.
Applying the manifold theory to Milky Way models: First steps on morphology and kinematics
Directory of Open Access Journals (Sweden)
Antoja T.
2012-02-01
Full Text Available We present recent results obtained by applying invariant manifold techniques to analytical models of the Milky Way. It has been shown that invariant manifolds can reproduce successfully the spiral arms and rings in external barred galaxies. Here, for the first time, we apply this theory to Milky Way models. We select five different models from the literature and, using the parameters chosen by the authors of the papers, consider three different cases: Case 1, where only the COBE/DIRBE bar is included in the potential; Case 2, where the COBE/DIRBE bar and the Long bar are aligned; and Case 3, where the COBE/DIRBE bar and the Long bar are misaligned. We compute in each case and for each model the orbits trapped by the manifolds. In general, the global morphology of the manifolds can account for the 3-kpc arms and for the Galactic Molecular Ring.
DEFF Research Database (Denmark)
Quinonero, Joaquin; Girard, Agathe; Larsen, Jan
2003-01-01
The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting. The capability of the method is demonstrated for forecasting of time-series and compared to approximate methods.
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to
From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets
K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)
2006-01-01
textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with
REVERSE ENGINEERING IN MODELING OF AIRCRAFT PROPELLER BLADE - FIRST STEP TO PRODUCT OPTIMIZATION
Directory of Open Access Journals (Sweden)
Muhammad Yasir Anwar
2014-12-01
Full Text Available ABSTRACT: Propeller aircraft have had many ups and downs throughout their use in aviation history. Due to the current economic recession and price hikes in fuels, propeller aircraft may yet again be a choice for aerial transport and have thus re-emerged as an active area for research. On modern propeller aircraft, old aluminum propellers are being replaced with fiber-reinforced composite propellers. However, owing to their reliability, strength, and integrity, aluminum propellers are still used in military aircraft. One of the challenges that engineers of this aircraft type have had to deal with is the non-availability of engineering drawings of these propellers. It is practically impossible to carry out any study, research or modification on such propellers in the absence of correct CAD data. This article proposes a methodology wherein a CAD model of a C-130 aircraft propeller blade can be constructed using reverse engineering techniques. Such a model would help in future aerodynamic as well as structural analyses, which include investigation of the structural integrity and the fluid dynamics characteristics of propeller blades. Different steps involved in this process are discussed, starting from laser scanning to obtain the point-cloud data and subsequently generating a CAD model in commercial CAD software. The model is then imported into analysis software where quality surface meshes are generated using tetrahedral elements. The purpose is to prepare a meshed model for future computational analysis including CFD (Computational Fluid Dynamics) and FE (Finite Element) analysis. ABSTRAK (translated from Malay): Propeller aircraft have had their ups and downs throughout their use in aviation history. Now, owing to the economic recession and rising fuel prices, propeller aircraft may again become the air transport of choice and have thus re-emerged as an active area of research. On modern propeller aircraft, the aluminium
Multi-Wavelength Laser Transmitter for the Two-Step Laser Time-of-Flight Mass Spectrometer
Yu, Anthony W.; Li, Steven X.; Fahey, Molly E.; Grubisic, Andrej; Farcy, Benjamin J.; Uckert, Kyle; Li, Xiang; Getty, Stephanie
2017-01-01
Missions to diverse Outer Solar System bodies will require investigations that can detect a wide range of organics in complex mixtures, determine the structure of selected molecules, and provide powerful insights into their origin and evolution. Previous studies from remote spectroscopy of the Outer Solar System showed a diverse population of macromolecular species that are likely to include aromatic and conjugated hydrocarbons with varying degrees of methylation and nitrile incorporation. In situ exploration of Titan's upper atmosphere via mass and plasma spectrometry has revealed a complex mixture of organics. Similar material is expected on the Ice Giants, their moons, and other Outer Solar System bodies, where it may subsequently be deposited onto surface ices. It is evident that the detection of organics on other planetary surfaces provides insight into the chemical and geological evolution of a Solar System body of interest and can inform our understanding of its potential habitability. We have developed a prototype two-step laser desorption/ionization time-of-flight mass spectrometer (L2MS) instrument by exploiting the resonance-enhanced desorption of analyte. We have successfully demonstrated the ability of the L2MS to detect hydrocarbons in organically-doped analog minerals, including cryogenic Ocean World-relevant ices and mixtures. The L2MS instrument operates by generating a neutral plume of desorbed analyte with an IR desorption laser pulse, followed at a delay by an ultraviolet (UV) laser pulse, ionizing the plume. Desorption of the analyte, including trace organic species, may be enhanced by selecting the wavelength of the IR desorption laser to coincide with IR absorption features associated with vibrational transitions of minerals or organic functional groups. In this effort, a preliminary laser developed for the instrument uses a breadboard mid-infrared (MIR) desorption laser operating at a discrete 3.475 µm wavelength, and a breadboard UV
Time-varying modeling of cerebral hemodynamics.
Marmarelis, Vasilis Z; Shin, Dae C; Orme, Melissa; Rong Zhang
2014-03-01
The scientific and clinical importance of cerebral hemodynamics has generated considerable interest in their quantitative understanding via computational modeling. In particular, two aspects of cerebral hemodynamics, cerebral flow autoregulation (CFA) and CO2 vasomotor reactivity (CVR), have attracted much attention because they are implicated in many important clinical conditions and pathologies (orthostatic intolerance, syncope, hypertension, stroke, vascular dementia, mild cognitive impairment, Alzheimer's disease, and other neurodegenerative diseases with cerebrovascular components). Both CFA and CVR are dynamic physiological processes by which cerebral blood flow is regulated in response to fluctuations in cerebral perfusion pressure and blood CO2 tension. Several modeling studies to date have analyzed beat-to-beat hemodynamic data in order to advance our quantitative understanding of CFA-CVR dynamics. A confounding factor in these studies is the fact that the dynamics of the CFA-CVR processes appear to vary with time (i.e., changes in cerebrovascular characteristics) due to neural, endocrine, and metabolic effects. This paper seeks to address this issue by tracking the changes in linear time-invariant models obtained from short successive segments of data from ten healthy human subjects. The results suggest that systemic variations exist but have stationary statistics and, therefore, the use of time-invariant modeling yields "time-averaged models" of physiological and clinical utility.
Modeling of the time sharing for lecturers
Directory of Open Access Journals (Sweden)
E. Yu. Shakhova
2017-01-01
Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of a university lecturer and others. The mathematical problem of optimal working time planning for university lecturers is presented. A review of documents and of native and foreign works on the subject is given. Simulation conditions, based on an analysis of the subject area, are defined. Models of the optimal working time sharing of university lecturers («the second half of the day») are developed and implemented in the MathCAD system. Optimal solutions have been obtained. Three problems have been solved: (1) to find the optimal time sharing of «the second half of the day» for a certain position of university lecturer; (2) to find the optimal time sharing of «the second half of the day» for all positions of university lecturers in view of the established model of academic load differentiation; (3) to find the volume of the non-standardized part of working time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual professor in the preparation of the work plan of a university department for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.
Howe, A E; Whitley, L D; 10.1613/jair.1576
2011-01-01
Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with ...
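The distance-to-optimum measure described in this record can be sketched in code. A minimal illustration, assuming a simplified pairwise-ordering distance between schedules (the paper's actual disjunctive-graph distance for the JSP is richer), of the mean distance from random local optima to the nearest known optimum:

```python
from itertools import combinations

def disjunctive_distance(s1, s2):
    """Count job pairs ordered differently in the two schedules
    (a simplified stand-in for the disjunctive-graph distance)."""
    pos1 = {job: i for i, job in enumerate(s1)}
    pos2 = {job: i for i, job in enumerate(s2)}
    return sum(
        1
        for a, b in combinations(pos1, 2)
        if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
    )

def mean_distance_to_nearest_optimum(local_optima, optima):
    """Average, over a sample of local optima, of the distance to the
    closest known optimal solution -- the difficulty proxy studied."""
    return sum(
        min(disjunctive_distance(s, o) for o in optima)
        for s in local_optima
    ) / len(local_optima)
```

The hypothesis examined in the record is that this mean distance correlates with how long tabu search needs to reach an optimum.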
The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input
Senner, Jill E.; Baud, Matthew R.
2017-01-01
An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…
Human airway organoid engineering as a step toward lung regeneration and disease modeling.
Tan, Qi; Choi, Kyoung Moo; Sicard, Delphine; Tschumperlin, Daniel J
2017-01-01
Organoids represent both a potentially powerful tool for the study of cell-cell interactions within tissue-like environments, and a platform for tissue regenerative approaches. The development of lung tissue-like organoids from human adult-derived cells has not previously been reported. Here we combined human adult primary bronchial epithelial cells, lung fibroblasts, and lung microvascular endothelial cells in supportive 3D culture conditions to generate airway organoids. We demonstrate that randomly-seeded mixed cell populations undergo rapid condensation and self-organization into discrete epithelial and endothelial structures that are mechanically robust and stable during long term culture. After condensation, airway organoids generate invasive multicellular tubular structures that recapitulate limited aspects of branching morphogenesis, and require actomyosin-mediated force generation and YAP/TAZ activation. Despite the proximal source of primary epithelium used in the airway organoids, discrete areas of both proximal and distal epithelial markers were observed over time in culture, demonstrating remarkable epithelial plasticity within the context of organoid cultures. Airway organoids also exhibited complex multicellular responses to a prototypical fibrogenic stimulus (TGF-β1) in culture, and limited capacity to undergo continued maturation and engraftment after ectopic implantation under the murine kidney capsule. These results demonstrate that the airway organoid system developed here represents a novel tool for the study of disease-relevant cell-cell interactions, and establishes this platform as a first step toward cell-based therapy for chronic lung diseases based on de novo engineering of implantable airway tissues.
Multi-Step Usage of in Vivo Models During Rational Drug Design and Discovery
Directory of Open Access Journals (Sweden)
Charles H. Williams
2011-04-01
Full Text Available In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it is a herculean challenge requiring exorbitant resources, and it often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We will review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems-biology-based approach, paired with the power of computational biology, genetics, and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design.
Multi-step usage of in vivo models during rational drug design and discovery.
Williams, Charles H; Hong, Charles C
2011-01-01
In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it is a herculean challenge requiring exorbitant resources, and it often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We will review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems-biology-based approach, paired with the power of computational biology, genetics, and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design.
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing
2017-05-01
We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet to largely avoid redundant frequency information and to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and of step-length formulas on the multiscale FWI through several numerical tests. The investigations of up to eight versions of the nonlinear CG method with and without Gaussian white noise make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Française Informat Recherche Opérationnelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields an inaccurate result, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and the complexity of the model. When the initial velocity model deviates far from the real model or the
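The two-point quadratic interpolation step-length formula (Interp) mentioned in this record is standard in line-search methods. A minimal sketch, assuming the usual textbook form: fit a quadratic through φ(0) with slope φ'(0) and through a trial point φ(α0), then take its minimizer:

```python
def quadratic_interp_step(phi0, dphi0, alpha0, phi_alpha0):
    """Two-point quadratic interpolation: fit q(a) = phi0 + dphi0*a + c*a^2
    through (0, phi0) with slope dphi0 and through (alpha0, phi_alpha0),
    then return the minimizer of q, a* = -dphi0 / (2c)."""
    c = (phi_alpha0 - phi0 - dphi0 * alpha0) / alpha0 ** 2
    return -dphi0 / (2.0 * c)
```

For the exactly quadratic objective φ(α) = (α - 2)², a trial step α0 = 1 recovers the true minimizer α = 2 in one evaluation.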
Instability of a two-step Rankine vortex in a reduced gravity QG model
Perrot, Xavier; Carton, Xavier
2014-01-01
We investigate the stability of a steplike Rankine vortex in a one-active-layer, reduced gravity, quasi-geostrophic model. After calculating the linear stability with a normal mode analysis, the singular modes are determined as a function of the vortex shape to investigate short-time stability. Finally we determine the position of the critical layer and show its influence when it lies inside the vortex.
Instability of a two-step Rankine vortex in a reduced gravity QG model
Energy Technology Data Exchange (ETDEWEB)
Perrot, Xavier [Laboratoire de Météorologie Dynamique, Ecole Normale Supérieure, 24 rue Lhomond, F-75005 Paris (France); Carton, Xavier, E-mail: xperrot@lmd.ens.fr, E-mail: xcarton@univ-brest.fr [Laboratoire de Physique des Océans, Université de Bretagne Occidentale, 6 avenue Le Gorgeu, F-29200 Brest (France)
2014-06-01
We investigate the stability of a steplike Rankine vortex in a one-active-layer, reduced gravity, quasi-geostrophic model. After calculating the linear stability with a normal mode analysis, the singular modes are determined as a function of the vortex shape to investigate short-time stability. Finally we determine the position of the critical layer and show its influence when it lies inside the vortex.
Anderson, Daren R; Zlateva, Ianita; Coman, Emil N; Khatri, Khushbu; Tian, Terrence; Kerns, Robert D
2016-01-01
Purpose: Treating pain in primary care is challenging. Primary care providers (PCPs) receive limited training in pain care and express low confidence in their knowledge and ability to manage pain effectively. Models to improve pain outcomes have been developed, but not formally implemented in safety net practices where pain is particularly common. This study evaluated the impact of implementing the Stepped Care Model for Pain Management (SCM-PM) at a large, multisite Federally Qualified Health Center. Methods: The Promoting Action on Research Implementation in Health Services framework guided the implementation of the SCM-PM. The multicomponent intervention included: education on pain care, new protocols for pain assessment and management, implementation of an opioid management dashboard, telehealth consultations, and enhanced onsite specialty resources. Participants included 25 PCPs and their patients with chronic pain (3,357 preintervention and 4,385 postintervention) cared for at Community Health Center, Inc. Data were collected from the electronic health record and supplemented by chart reviews. Surveys were administered to PCPs to assess knowledge, attitudes, and confidence. Results: Providers' pain knowledge scores increased by an average of 11% from baseline; self-rated confidence in ability to manage pain also increased. Use of opioid treatment agreements and urine drug screens increased significantly, by 27.3% and 22.6%, respectively. Significant improvements were also noted in documentation of pain, pain treatment, and pain follow-up. Referrals to behavioral health providers for patients with pain increased by 5.96% (P=0.009). There was no significant change in opioid prescribing. Conclusion: Implementation of the SCM-PM resulted in clinically significant improvements in several quality of pain care outcomes. These findings, if sustained, may translate into improved patient outcomes. PMID:27881926
Forecasting with nonlinear time series models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Teräsvirta, Timo
and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently...... applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...
Energy Technology Data Exchange (ETDEWEB)
Ikeya, Tomohiko; Mita, Yuichi; Ishihara, Kaoru [Central Research Inst. of Electric Power Industry, Tokyo (Japan); Sawada, Nobuyuki [Hokkaido Electric Power, Sapporo (Japan); Takagi, Sakae; Murakami, Jun-ichi [Tohoku Electric Power, Sendai (Japan); Kobayashi, Kazuyuki [Tokyo Electric Power, Yokohama (Japan); Sakabe, Tetsuya [Chubu Electric Power, Nagoya (Japan); Kousaka, Eiichi [Hokuriku Electric Power, Toyama (Japan); Yoshioka, Haruki [The Kansai Electric Power, Osaka (Japan); Kato, Satoru [The Chugoku Electric Power, Hiroshima (Japan); Yamashita, Masanori [Shikoku Research Inst., Takamatsu (Japan); Narisoko, Hayato [The Okinawa Electric Power, Naha (Japan); Nishiyama, Kazuo [The Central Electric Power Council, Tokyo (Japan); Adachi, Kazuyuki [Kyushu Electric Power, Fukuoka (Japan)
1998-09-01
For the popularization of electric vehicles (EVs), the conditions for charging EV batteries with available current patterns should allow complete charging in a short time, i.e., less than 5 to 8 h. Therefore, in this study, a new charging condition is investigated for the EV valve-regulated lead/acid battery system, which should allow complete charging of EV battery systems with multi-step constant currents in a much shorter time, with longer cycle life and higher energy efficiency, compared with two-step constant-current charging. Although a high magnitude of the first current in the two-step constant-current method prolongs cycle life by suppressing the softening of positive active material, too large a charging current magnitude degrades cells due to excess internal evolution of heat. A charging current magnitude of approximately 0.5 C is expected to prolong cycle life further. Three-step charging could also increase the magnitude of the charging current in the first step without shortening cycle life. Four- or six-step constant-current methods could shorten the charging time to less than 5 h, as well as yield higher energy efficiency and an enhanced cycle life of over 400 cycles compared with two-step charging with a first-step current of 0.5 C. Investigation of the degradation mechanism of the batteries revealed that the conditions of multi-step constant-current charging suppressed softening of positive active material and sulfation of negative active material, but, unfortunately, advanced the corrosion of the grids in the positive plates. By adopting improved grids and cooling of the battery system, the multi-step constant-current method may enhance the cycle life.
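The charging-time arithmetic behind multi-step constant-current schedules can be illustrated with a toy calculation. This is a hedged sketch assuming ideal coulombic efficiency; it ignores the heat and degradation effects the study actually measures:

```python
def multi_step_charge_time(capacity_ah, step_currents_c, step_fractions):
    """Idealized charge-time estimate for multi-step constant-current
    charging. step_currents_c: current of each step as a C-rate
    (e.g. 0.5 means current = 0.5 * capacity in amps); step_fractions:
    fraction of total capacity returned in each step (must sum to 1).
    Coulombic losses and thermal effects are ignored in this sketch."""
    assert abs(sum(step_fractions) - 1.0) < 1e-9
    hours = 0.0
    for c_rate, frac in zip(step_currents_c, step_fractions):
        amps = c_rate * capacity_ah
        hours += frac * capacity_ah / amps  # time = charge / current
    return hours
```

For a 100 Ah pack charged 80% at 0.5 C then 20% at 0.25 C, the ideal time is 1.6 h + 0.8 h = 2.4 h, which shows why raising the early-step C-rate is the lever for sub-5 h charging.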
Formal Modeling and Analysis of Timed Systems
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Niebert, Peter
This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts...
Elliott, Mark A.; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential – not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data
Modeling biological rhythms in failure time data
Directory of Open Access Journals (Sweden)
Myles James D
2006-11-01
Full Text Available Abstract Background The human body exhibits a variety of biological rhythms. There are patterns that correspond, among others, to the daily wake/sleep cycle, a yearly seasonal cycle and, in women, the menstrual cycle. Sine/cosine functions are often used to model biological patterns for continuous data, but this model is not appropriate for the analysis of biological rhythms in failure time data. Methods We adapt the cosinor method to the proportional hazards model and present a method to provide an estimate and confidence interval of the time when the minimum hazard is achieved. We then apply this model to data taken from a clinical trial of adjuvant therapy for pre-menopausal breast cancer patients. Results The application of this technique to the breast cancer data revealed that the optimal day in the 28-day cycle for pre-resection incisional or excisional biopsy (i.e., the day associated with the lowest recurrence rate) is day 8, with a 95% confidence interval of 4–12 days. We found that older age, fewer positive nodes, smaller tumor size, and experimental treatment were predictive of longer relapse-free survival. Conclusion In this paper we have described a method for modeling failure time data with an underlying biological rhythm. The advantage of adapting a cosinor model to the proportional hazards model is its ability to model right-censored data. We have presented a method to provide an estimate and confidence interval of the day in the menstrual cycle where the minimum hazard is achieved. This method is not limited to breast cancer data, and may be applied to any biological rhythms linked to right-censored data.
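The day of minimum hazard in a cosinor-adapted proportional hazards model follows from elementary trigonometry: the covariate term b1·cos(ωt) + b2·sin(ωt) equals R·cos(ωt − φ), which is smallest half a period after the acrophase. A minimal sketch (coefficient names are illustrative, not taken from the paper):

```python
import math

def day_of_minimum_hazard(b_cos, b_sin, period=28.0):
    """Given fitted cosinor coefficients in a proportional-hazards model,
    h(t) = h0(t) * exp(b_cos*cos(w*t) + b_sin*sin(w*t)) with w = 2*pi/period,
    return the time in the cycle where the multiplicative hazard term is
    smallest, using b_cos*cos + b_sin*sin = R*cos(w*t - phase)."""
    w = 2.0 * math.pi / period
    phase = math.atan2(b_sin, b_cos)  # acrophase (time of maximum hazard)
    return ((phase + math.pi) / w) % period
```

With b_cos = 1 and b_sin = 0 the hazard peaks at day 0, so the minimum falls at day 14 of a 28-day cycle; a confidence interval for this day would come from the sampling variability of the fitted coefficients.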
Improving Genetic Evaluation of Litter Size Using a Single-step Model
DEFF Research Database (Denmark)
Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage
A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from single-step method and the traditional pedigree-based method for two litter size...... traits, total number of piglets born (TNB), and litter size at five days after birth (Ls 5) in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only...
Time models and cognitive processes: a review
Directory of Open Access Journals (Sweden)
Michail eManiadakis
2014-02-01
Full Text Available The sense of time is an essential capacity of humans, with a major role in many of the cognitive processes expressed in our daily lives. So far, in cognitive science and robotics research, mental capacities have been investigated in a theoretical and modelling framework that largely neglects the flow of time. Only recently has there been a small but constantly increasing interest in the temporal aspects of cognition, integrating time into a range of different models of perceptuo-motor capacities. The current paper aims to review existing works in the field and suggest directions for fruitful future work. This is particularly important for the newly developed field of artificial temporal cognition, which is expected to contribute significantly to the development of sophisticated artificial agents seamlessly integrated into human societies.
Constitutive model with time-dependent deformations
DEFF Research Database (Denmark)
Krogsbøll, Anette
1998-01-01
are common in time as well as size. This problem is addressed by means of a new constitutive model for soils. It is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately. They are related to each other and they occur......In many geological and engineering problems it is necessary to transform information from one scale to another. Data collected at laboratory scale are often used to evaluate field problems on a much larger scale. This is certainly true for geological problems where extreme scale differences...... simultaneously. The model is based on concepts from elasticity and viscoplasticity theories. In addition to Hooke's law for the elastic behavior, the framework for the viscoplastic behavior consists, in the general case (two-dimensional or three-dimensional), of a yield surface, an associated flow rule
Energy Technology Data Exchange (ETDEWEB)
Wood, Claire [CTSI; Bremner, Brenda [CTSI
2013-08-09
The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.
Olinde, Lindsay; Johnson, Joel P. L.
2015-09-01
We present new measurements of bed load tracer transport in a mountain stream over several snowmelt seasons. Cumulative displacements were measured using passive tracers, which consisted of gravel and cobbles embedded with radio frequency identification tags. The timing of bed load motion during 11 transporting events was quantified with active tracers, i.e., accelerometer-embedded cobbles. Probabilities of cobble transport increased with discharge above a threshold, and exhibited slight to moderate hysteresis during snowmelt hydrographs. Dividing cumulative displacements by the number of movements recorded by each active tracer constrained average step lengths. Average step lengths increased with discharge, and distributions of average step lengths and cumulative displacements were thin tailed. Distributions of rest times followed heavy-tailed power law scaling. Rest time scaling varied somewhat with discharge and with the degree to which tracers were incorporated into the streambed. The combination of thin-tailed displacement distributions and heavy-tailed rest time distributions predict superdiffusive dispersion.
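Two of the quantities in this record are simple to compute once the tracer data are in hand: the average step length (cumulative displacement divided by recorded movements) and a tail exponent for the heavy-tailed rest times. A hedged sketch; the Hill estimator used below for the power-law tail is an assumption, since the paper's fitting procedure is not specified in the abstract:

```python
import math

def average_step_length(cumulative_displacement_m, n_movements):
    """Average step length = total tracer displacement divided by the
    number of motion events recorded by the accelerometer tracer."""
    return cumulative_displacement_m / n_movements

def hill_tail_exponent(rest_times, t_min):
    """Hill estimator for the exponent of a heavy power-law tail:
    alpha = 1 + n / sum(ln(t_i / t_min)) over rest times >= t_min.
    (An assumed estimator; other tail-fitting methods exist.)"""
    tail = [t for t in rest_times if t >= t_min]
    return 1.0 + len(tail) / sum(math.log(t / t_min) for t in tail)
```

The combination the record describes, thin-tailed step lengths with a heavy-tailed (small alpha) rest-time distribution, is what drives the predicted superdiffusive dispersion.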
A two-phase model of plantar tissue: a step toward prediction of diabetic foot ulceration.
Sciumè, G; Boso, D P; Gray, W G; Cobelli, C; Schrefler, B A
2014-11-01
A new computational model, based on the thermodynamically constrained averaging theory, has been recently proposed to predict tumor initiation and proliferation. A similar mathematical approach is proposed here as an aid in diabetic ulcer prevention. The common aspects at the continuum level are the macroscopic balance equations governing the flow of the fluid phase, diffusion of chemical species, tissue mechanics, and some of the constitutive equations. The soft plantar tissue is modeled as a two-phase system: a solid phase consisting of the tissue cells and their extracellular matrix, and a fluid one (interstitial fluid and dissolved chemical species). The solid phase may become necrotic depending on the stress level and on the oxygen availability in the tissue. Actually, in diabetic patients, peripheral vascular disease impacts tissue necrosis; this is considered in the model via the introduction of an effective diffusion coefficient that governs transport of nutrients within the microvasculature. The governing equations of the mathematical model are discretized in space by the finite element method and in the time domain using the θ-Wilson method. While the full mathematical model is developed in this paper, the example is limited to the simulation of several gait cycles of a healthy foot.
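The θ-family of time-integration schemes referenced in this record (the θ-Wilson method is a variant that extrapolates to t + θΔt with θ > 1) can be illustrated on the scalar decay equation du/dt = −ku. A minimal sketch of the basic θ scheme only, not the paper's full finite element discretization:

```python
def theta_step(u, k, dt, theta):
    """One step of the generalized theta time-integration scheme for
    du/dt = -k*u: theta=0 gives explicit Euler, theta=1 implicit Euler,
    theta=0.5 Crank-Nicolson. Solving
        (u_new - u) / dt = -k * (theta*u_new + (1-theta)*u)
    for u_new yields the update below."""
    return u * (1.0 - (1.0 - theta) * k * dt) / (1.0 + theta * k * dt)
```

With theta = 0 and k·dt = 0.1 the update multiplies u by 0.9 (explicit Euler); with theta = 1 it multiplies by 1/1.1 (implicit Euler), which stays stable for any positive time step.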
Lechleitner, Franziska A.; Jamieson, Robert A.; McIntyre, Cameron; Baldini, Lisa M.; Baldini, James U. L.; Eglinton, Timothy I.
2015-04-01
Over the past two decades, speleothems have become one of the most versatile and promising archives for the study of past continental climate. Very precise absolute dating is often possible using the U-Th method, resulting in paleoclimate records of exceptional resolution and accuracy. However, not all speleothems are amenable to this dating method for a variety of reasons (e.g. low U concentrations, high detrital Th, etc.). This has led researchers to exclude many otherwise suitable speleothems and cave sites from further investigation. 14C-dating of speleothems has so far not been applicable, due to the 'dead carbon' problem. As drip water percolates through the karst, dissolving CaCO3, a variable amount of 14C-dead carbon is added to the solution. This results in a temporally variable and site-specific reservoir effect, ultimately undermining the development of speleothem 14C-chronologies. However, a number of recent studies have shown a clear link between karst hydrology and associated proxies (e.g., Mg/Ca and δ13C) and this 'dead carbon fraction' (DCF). We take advantage of this relationship to model DCF and its changes using Mg/Ca, δ13C and 14C data from published speleothem records. Using one record for calibration purposes, we build a transfer function for the DCF in relation to δ13C and Mg/Ca, which we then apply to other 14C records. Initial model results are promising; we are able to reconstruct the general long-term average DCF within uncertainties of the DCF calculated from the U-Th chronology. Large shifts in DCF related to hydrology are also often detected. In a second step, we apply the model to a speleothem from southern Poland, which could not previously be dated due to its very low U concentrations. To construct a 14C chronology, the stalagmite was sampled at 5 mm intervals. CaCO3 powders were graphitized and measured by Accelerator Mass Spectrometry (MICADAS) at ETH Zurich. Additional high-resolution (0.1 mm/sample) 14C measurements were performed on
Khuvis, Samuel; Gobbert, Matthias K; Peercy, Bradford E
2015-05-01
Physiologically realistic simulations of computational islets of beta cells require the long-time solution of several thousand coupled ordinary differential equations (ODEs), resulting from the combination of several ODEs in each cell and realistic numbers of several hundred cells in an islet. For a reliable and accurate solution of complex nonlinear models up to the desired final times on the scale of several bursting periods, an ODE solver designed for stiff problems becomes a necessity, since other solvers may not be able to handle the problem or are exceedingly inefficient. But stiff solvers are potentially significantly harder to use, since their algorithms require at least an approximation of the Jacobian matrix. For sophisticated models with systems of several complex ODEs in each cell, it is practically unworkable to differentiate these intricate nonlinear systems analytically and to manually program the resulting Jacobian matrix in computer code. This paper demonstrates that automatic differentiation can be used to obtain code for the Jacobian directly from code for the ODE system, which allows a full accounting for the sophisticated model equations. This technique is also feasible in the source-code languages Fortran and C, and the conclusions apply to a wide range of systems of coupled, nonlinear reaction equations. When we combine an appropriately supplied Jacobian with slightly modified memory management in the ODE solver, simulations on the realistic scale of one thousand cells in the islet become possible that are several orders of magnitude faster than the original solver in the software Matlab, a language that is particularly user friendly for programming complicated model equations. We use the efficient simulator to analyze electrical bursting and show non-monotonic average burst period between fast and slow cells for increasing coupling strengths. We also find that, interestingly, the arrangement of the connected fast
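The core idea, obtaining code for the Jacobian directly from code for the ODE right-hand side, can be sketched with forward-mode automatic differentiation via dual numbers. This is a self-contained illustration of the technique, not the authors' implementation; the two-variable stiff system below is a generic toy problem, not the beta-cell model:

```python
# Forward-mode automatic differentiation with dual numbers, used to build
# the Jacobian of an ODE right-hand side column by column.

class Dual:
    """Number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a - o.a, self.b - o.b)

def rhs(y):
    # dy0/dt = -1000*y0 + y1,  dy1/dt = y0 - y1  (stiff linear toy system)
    return [-1000.0 * y[0] + y[1], y[0] - y[1]]

def jacobian(f, y):
    """Evaluate df_i/dy_j by seeding one dual direction per column."""
    n = len(y)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        seeded = [Dual(y[k], 1.0 if k == j else 0.0) for k in range(n)]
        col = f(seeded)
        for i in range(n):
            J[i][j] = col[i].b
    return J

print(jacobian(rhs, [1.0, 2.0]))  # [[-1000.0, 1.0], [1.0, -1.0]]
```

The same `jacobian` routine works unchanged for any right-hand side written with these overloaded operations, which is the point of the approach: the Jacobian code never has to be derived or maintained by hand.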
Fisher Information Framework for Time Series Modeling
Venkatesan, R C
2016-01-01
A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defi...
Schuck, Peter W.; Linton, Mark; Muglach, Karin; Welsch, Brian; Hageman, Jacob
2010-01-01
The imminent launch of the Solar Dynamics Observatory (SDO) will carry the first full-disk imaging vector magnetograph, the Helioseismic and Magnetic Imager (HMI), into an inclined geosynchronous orbit. This magnetograph will provide nearly continuous measurements of photospheric vector magnetic fields at cadences of 90 seconds to 12 minutes with 1″ resolution, precise pointing, and unfettered by atmospheric seeing. The enormous data stream of 1.5 Terabytes per day from SDO will provide an unprecedented opportunity to understand the mysteries of solar eruptions. These ground-breaking observations will permit the application of a new technique, the differential affine velocity estimator for vector magnetograms (DAVE4VM), to measure photospheric plasma flows in active regions. These measurements will permit, for the first time, accurate assessments of the coronal free energy available for driving CMEs and flares. The details of photospheric plasma flows, particularly along magnetic neutral lines, are critical to testing models for initiating coronal mass ejections (CMEs) and flares. Assimilating flows and fields into state-of-the-art 3D MHD simulations that model the highly stratified solar atmosphere from the convection zone to the corona represents the next step towards achieving NASA's Living with a Star forecasting goals of predicting "when a solar eruption leading to a CME will occur." This talk will describe these major science and predictive advances that will be delivered by SDO/HMI.
Energy Technology Data Exchange (ETDEWEB)
Lawrenz, M.
2007-10-30
In the present work, the dynamics of CO molecules on a stepped Pt(111) surface induced by fs-laser pulses at low temperatures was studied using laser spectroscopy. In the first part of the work, laser-induced diffusion in the CO/Pt(111) system was demonstrated and successfully modelled for step diffusion. First, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled: a friction coefficient that depends on the electron temperature yields a consistent model, and the same set of parameters was used in parallel to describe both the fluence dependence and the time-resolved measurements. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces, and the diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. Two-pulse correlation measurements, performed in addition, also indicate an electronically induced process. At a substrate temperature of 40 K, the cross-correlation - from which an energy transfer time of 1.8 ps was extracted - likewise suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed for different substrate temperatures. (orig.)
Directory of Open Access Journals (Sweden)
Craig Cora L
2011-06-01
Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent watching television were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.
Linear Parametric Model Checking of Timed Automata
DEFF Research Database (Denmark)
Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle
2001-01-01
We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. Also, we present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...
Time series modeling for automatic target recognition
Sokolnikov, Andre
2012-05-01
Time series modeling is proposed for the identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. A missing-data approach is elaborated that aids the prediction process for image creation or reconstruction. The results are provided.
Directory of Open Access Journals (Sweden)
Anderson DR
2016-11-01
Full Text Available Daren R Anderson,1 Ianita Zlateva,1 Emil N Coman,2 Khushbu Khatri,1 Terrence Tian,1 Robert D Kerns3 1Weitzman Institute, Community Health Center, Inc., Middletown, 2UCONN Health Disparities Institute, University of Connecticut, Farmington, 3VA Connecticut Healthcare System, West Haven, CT, USA Purpose: Treating pain in primary care is challenging. Primary care providers (PCPs) receive limited training in pain care and express low confidence in their knowledge and ability to manage pain effectively. Models to improve pain outcomes have been developed, but not formally implemented in safety net practices where pain is particularly common. This study evaluated the impact of implementing the Stepped Care Model for Pain Management (SCM-PM) at a large, multisite Federally Qualified Health Center. Methods: The Promoting Action on Research Implementation in Health Services framework guided the implementation of the SCM-PM. The multicomponent intervention included: education on pain care, new protocols for pain assessment and management, implementation of an opioid management dashboard, telehealth consultations, and enhanced onsite specialty resources. Participants included 25 PCPs and their patients with chronic pain (3,357 preintervention and 4,385 postintervention) cared for at Community Health Center, Inc. Data were collected from the electronic health record and supplemented by chart reviews. Surveys were administered to PCPs to assess knowledge, attitudes, and confidence. Results: Providers’ pain knowledge scores increased by an average of 11% from baseline; self-rated confidence in ability to manage pain also increased. Use of opioid treatment agreements and urine drug screens increased significantly by 27.3% and 22.6%, respectively. Significant improvements were also noted in documentation of pain, pain treatment, and pain follow-up. Referrals to behavioral health providers for patients with pain increased by 5.96% (P=0.009). There was no
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two time step method is either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two time step method is used, as opposed to a one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
Baudron, Anne-Marie A -M; Maday, Yvon; Riahi, Mohamed Kamel; Salomon, Julien
2014-01-01
We present a parareal in time algorithm for the simulation of a neutron diffusion transient model. The method is made efficient by means of a coarse solver defined with large time steps and a steady control rods model. Using finite elements for the space discretization, our implementation provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner (LMW) benchmark [1].
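The parareal idea, a cheap coarse propagator corrected iteratively by an accurate fine propagator, can be sketched on a scalar test equation. This is an illustration of the generic algorithm under simple assumptions (explicit Euler propagators, dy/dt = -y), not the neutron diffusion solver of the paper:

```python
import math

# Minimal parareal iteration for the scalar test problem dy/dt = -y.
# G is a cheap coarse propagator (one Euler step per time window);
# F is an accurate fine propagator (many Euler substeps per window).

def G(y, dt):                      # coarse: single Euler step
    return y * (1.0 - dt)

def F(y, dt, substeps=100):        # fine: many Euler substeps
    h = dt / substeps
    for _ in range(substeps):
        y = y * (1.0 - h)
    return y

def parareal(y0, t_end, windows, iterations):
    dt = t_end / windows
    # Initial guess from a sequential coarse sweep.
    U = [y0]
    for n in range(windows):
        U.append(G(U[n], dt))
    for _ in range(iterations):
        F_old = [F(U[n], dt) for n in range(windows)]   # parallel in practice
        G_old = [G(U[n], dt) for n in range(windows)]
        V = [y0]
        for n in range(windows):
            # Predictor-corrector update: new coarse + (fine - old coarse).
            V.append(G(V[n], dt) + F_old[n] - G_old[n])
        U = V
    return U

U = parareal(1.0, 1.0, windows=4, iterations=3)
print(abs(U[-1] - math.exp(-1.0)))   # small error vs the exact solution
```

The fine solves within each iteration are independent across windows, which is where the parallel speedup comes from; the sequential work is reduced to the cheap coarse sweep.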
Modelling of Patterns in Space and Time
Murray, James
1984-01-01
This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of applications, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...
Time Series Modelling using Proc Varmax
DEFF Research Database (Denmark)
Milhøj, Anders
2007-01-01
In this paper it will be demonstrated how various time series problems can be met using Proc Varmax. The procedure is rather new, and hence new features like cointegration and testing for Granger causality are included, but it also means that more traditional ARIMA modelling as outlined by Box & Jenkins is performed in a more modern way, using the computer resources which are now available.
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
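The first step described above, finding the temporal offset that maximizes the overlap between two chromatographic profiles via FFT-based correlation, can be sketched as follows. The function name and the synthetic Gaussian-peak profiles are illustrative stand-ins for the ChromAlign implementation and real LC-MS data:

```python
import numpy as np

# Estimate the temporal offset between two chromatographic profiles using
# FFT-based circular cross-correlation (correlation theorem).

def estimate_offset(reference, sample):
    """Return the signed shift (in scans) of `sample` relative to `reference`."""
    n = len(reference)
    # corr[k] = sum_m reference[m] * sample[(m - k) mod n]
    corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(sample))).real
    lag = int(np.argmax(corr))
    return lag if lag <= n // 2 else lag - n   # map to signed offset

t = np.arange(512)
reference = np.exp(-0.5 * ((t - 200) / 8.0) ** 2)   # peak at scan 200
sample = np.exp(-0.5 * ((t - 230) / 8.0) ** 2)      # same peak, 30 scans later

print(estimate_offset(reference, sample))   # -30: the shared peak appears 30 scans later in sample
```

In the full algorithm this offset restricts which mass scans need pairwise correlation in the second step, which is what keeps the correlation-matrix computation affordable.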
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes that are being developed at Glenn. The two time step method is either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two step method is used, as opposed to a one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3
Adhere, Degrade, and Move: The Three-Step Model of Invasion.
Liotta, Lance A
2016-06-01
Experimental work during the period 1975-1986 revealed the crucial importance of the extracellular matrix (ECM) in tumor cell invasion and metastasis, and culminated in the three-step hypothesis of invasion. The first step is tumor cell attachment to the ECM. The second step is proteolytic degradation of the ECM, led by advancing protruding actin-rich pseudopods. The third step is migration of the tumor cell body through the remodeled matrix. This mechanistic scheme is widely accepted and continues to generate insights related to the large number of molecules, inside and outside invading cells, which all play a role in each of the three steps. Understanding the interaction of the tumor cells with the ECM has never been more clinically important. The ECM is not just a passive mechanical scaffold. Instead, the ECM is an active participant in neoplastic and physiologic invasion, and acts as an information highway, an immune sanctuary, and a storage depot supporting tumor growth and drug resistance. Cancer Res; 76(11); 3115-7. ©2016 AACR. See related article by Liotta, Cancer Res 1986;46:1-7. Visit the Cancer Research 75th Anniversary timeline.
Directory of Open Access Journals (Sweden)
Kawamura, Kenji
1991-06-01
Full Text Available Determination was made of step length, stride width, time factors and deviation in the center of pressure during up- and downslope walking in 17 healthy men between the ages of 19 and 34 using a force plate. Slope inclinations were set at 3, 6, 9 and 12 degrees. At 12 degrees, walking speed, the product of step length and cadence, decreased significantly (p less than 0.01) in both up- and downslope walking. The most conspicuous phenomenon in upslope walking was in cadence. The steeper the slope, the smaller was the cadence. The most conspicuous phenomenon in downslope walking was in step length. The steeper the slope, the shorter was the step length.
Kawamura, K; Tokuhiro, A; Takechi, H
1991-06-01
Determination was made of step length, stride width, time factors and deviation in the center of pressure during up- and downslope walking in 17 healthy men between the ages of 19 and 34 using a force plate. Slope inclinations were set at 3, 6, 9 and 12 degrees. At 12 degrees, walking speed, the product of step length and cadence, decreased significantly (p less than 0.01) in both up- and downslope walking. The most conspicuous phenomenon in upslope walking was in cadence. The steeper the slope, the smaller was the cadence. The most conspicuous phenomenon in downslope walking was in step length. The steeper the slope, the shorter was the step length.
Time series modeling for syndromic surveillance
Directory of Open Access Journals (Sweden)
Mandl Kenneth D
2003-01-01
Full Text Available Abstract Background Emergency department (ED)-based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions Time series methods applied to historical ED utilization data are an important tool
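The baseline idea, a trimmed-mean seasonal model of expected visit counts against which observed counts are compared, can be sketched as follows. This is an illustration of the general approach with synthetic counts and an arbitrary threshold, not the paper's ARIMA models:

```python
import statistics

# Expected ED visits per weekday from a trimmed mean of historical counts,
# then flag days whose observed count exceeds the expectation by a margin.
# Counts, weekday encoding, and the threshold are synthetic assumptions.

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after dropping the lowest and highest trim_fraction of values."""
    k = int(len(values) * trim_fraction)
    trimmed = sorted(values)[k:len(values) - k] if k else sorted(values)
    return statistics.mean(trimmed)

def expected_by_weekday(history):
    """history: list of (weekday, count) pairs spanning many weeks."""
    by_day = {}
    for weekday, count in history:
        by_day.setdefault(weekday, []).append(count)
    return {day: trimmed_mean(counts) for day, counts in by_day.items()}

def flag_outbreaks(observed, expected, threshold=1.3):
    """Flag days whose visit count exceeds `threshold` times the expectation."""
    return [day for day, count in observed if count > threshold * expected[day]]

history = [(d % 7, 100 + 10 * (d % 7)) for d in range(70)]   # flat weekly cycle
observed = [(0, 104), (1, 112), (2, 160), (3, 128)]          # spike on day 2
expected = expected_by_weekday(history)
print(flag_outbreaks(observed, expected))   # [2]
```

The trimming makes the baseline robust to past anomalies in the history, so an earlier outbreak does not inflate the expected rate it is compared against.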
A generalized additive regression model for survival times
DEFF Research Database (Denmark)
Scheike, Thomas H.
2001-01-01
Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models.
Greenhouse Modeling Using Continuous Timed Petri Nets
Directory of Open Access Journals (Sweden)
José Luis Tovany
2013-01-01
Full Text Available This paper presents a continuous timed Petri net (ContPN)-based greenhouse modeling methodology. The presented methodology is based on the definition of elementary ContPN modules, which are designed to capture the components of a general energy and mass balance differential equation, i.e., terms that reduce or increase variables such as heat, CO2 concentration, and humidity. The semantics of ContPNs is also extended in order to deal with variables depending on external greenhouse variables, such as solar radiation. Each external variable is represented by a place whose marking depends on an a priori known function, for instance, the solar radiation function of the greenhouse site, which can be obtained statistically. The modeling methodology is illustrated with a greenhouse modeling example.
Modeling utilization distributions in space and time.
Keating, Kim A; Cherry, Steve
2009-07-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
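The wrapped Cauchy kernel for circular temporal covariates such as day of year can be sketched as follows. The density formula is the standard wrapped Cauchy distribution; the observation days, function names, and concentration parameter rho are illustrative assumptions, not values from the study:

```python
import math

# Wrapped Cauchy kernel density over a circular covariate (day of year),
# the kind of kernel used for the temporal dimension of a product kernel
# utilization-distribution estimator.

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on [0, 2*pi), centered at mu, 0 <= rho < 1."""
    return (1.0 - rho ** 2) / (
        2.0 * math.pi * (1.0 + rho ** 2 - 2.0 * rho * math.cos(theta - mu))
    )

def day_to_angle(day, days_per_year=365.0):
    return 2.0 * math.pi * (day % days_per_year) / days_per_year

def circular_kde(day, observed_days, rho=0.9):
    """Kernel density over day of year; each observation contributes a kernel."""
    theta = day_to_angle(day)
    return sum(
        wrapped_cauchy(theta, day_to_angle(d), rho) for d in observed_days
    ) / len(observed_days)

observations = [100, 105, 110, 290, 300]   # clustered spring and autumn days
print(circular_kde(105, observations) > circular_kde(200, observations))   # True
```

Because the kernel wraps around the circle, density near day 365 correctly spills into early January, which an ordinary linear kernel on day-of-year would miss.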
Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.
2013-12-01
Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead using multi-variate inputs. Forecasting accuracy of peak values and at longer lead times was significantly improved.
The objective of this study was to develop a new one-step approach to directly construct predictive models for describing the growth of Salmonella Enteritidis (SE) in liquid egg white (LEW) and egg yolk (LEY). A five-strain cocktail of SE, induced to resist rifampicin at 100 mg/L, ...
Köhn, C.; Ebert, U.
2012-01-01
We model the energy resolved angular distribution of TGFs and of positrons produced by a negative lightning leader stepping upwards in a thundercloud. First we present our new results for doubly differential cross sections for Bremsstrahlung and pair production based on the triply differential cross
Designers Workbench: Towards Real-Time Immersive Modeling
Energy Technology Data Exchange (ETDEWEB)
Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Ma, K L
2001-10-03
This paper introduces the DesignersWorkbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The DesignersWorkbench aims at closing this technology or 'digital gap' experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Designers workbench: toward real-time immersive modeling
Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu
2000-05-01
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital gap' experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Modelling Limit Order Execution Times from Market Data
Kim, Adlar; Farmer, Doyne; Lo, Andrew
2007-03-01
Although the term ``liquidity'' is widely used in the finance literature, its meaning is loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means the ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step to understanding market liquidity, we studied the facet of liquidity related to limit order executions: execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution whose tail fatness can be controlled, to model limit order execution times.
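As a sketch of the distribution named above: the q-Weibull density generalizes the Weibull by replacing the exponential with the Tsallis q-exponential, which yields a power-law (fat) tail for q > 1. The parameter values below are illustrative, not taken from the talk.

```python
import math

def q_exp(x, q):
    # Tsallis q-exponential; reduces to exp(x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_weibull_pdf(t, q, beta, eta):
    # q-Weibull density; q = 1 recovers the ordinary Weibull
    return (2.0 - q) * (beta / eta) * (t / eta) ** (beta - 1.0) * q_exp(-((t / eta) ** beta), q)

def weibull_pdf(t, beta, eta):
    # ordinary Weibull density, for comparison
    return (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))
```

For q > 1 the density decays as a power law, so very long execution times are far more likely than under a plain Weibull fit.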
Modelling tourists arrival using time varying parameter
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors for supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists who visited Bali in the period January 2010 to December 2015 was used to model the number of Korean tourists to Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Observing that tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all of the predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
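The TVP-with-Kalman-filter idea can be sketched as a regression whose coefficients follow a random walk, filtered recursively. This is a generic sketch, not the authors' specification; the noise variances and diffuse prior are illustrative.

```python
def tvp_kalman_filter(ys, xs, sigma_eps=1.0, sigma_eta=0.01):
    """Kalman filter for y_t = x_t' beta_t + eps_t,
    beta_t = beta_{t-1} + eta_t (random-walk coefficients).
    Returns the final filtered coefficient vector."""
    k = len(xs[0])
    beta = [0.0] * k                                   # state mean
    P = [[1e6 if i == j else 0.0 for j in range(k)]    # diffuse prior covariance
         for i in range(k)]
    Q = sigma_eta ** 2                                 # state-noise variance
    R = sigma_eps ** 2                                 # observation-noise variance
    for y, x in zip(ys, xs):
        # predict: random-walk state, so only the covariance grows
        for i in range(k):
            P[i][i] += Q
        # update with the new observation
        yhat = sum(b * xi for b, xi in zip(beta, x))
        Px = [sum(P[i][j] * x[j] for j in range(k)) for i in range(k)]
        S = sum(x[i] * Px[i] for i in range(k)) + R    # innovation variance
        K = [Px[i] / S for i in range(k)]              # Kalman gain
        v = y - yhat                                   # innovation
        beta = [beta[i] + K[i] * v for i in range(k)]
        P = [[P[i][j] - K[i] * Px[j] for j in range(k)] for i in range(k)]
    return beta
```

With noiseless synthetic data the filtered coefficients converge quickly to the generating values.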
A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print
Knowlton, Steven A.
2016-01-01
Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…
Multi-step Attack Modelling and Simulation (MsAMS) Framework based on Mobile Ambients
Nunes Leal Franqueira, V.; Lopes, R H C; Eck, van P.A.T.
2008-01-01
Attackers take advantage of any security breach to penetrate an organisation perimeter and exploit hosts as stepping stones to reach valuable assets, deeper in the network. The exploitation of hosts is possible not only when vulnerabilities in commercial off-the-shelf (COTS) software components are ...
Joint modeling of longitudinal data and discrete-time survival outcome.
Qiu, Feiyou; Stein, Catherine M; Elston, Robert C
2016-08-01
A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0.
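The link between per-period hazards and the probability of an event in a time window, central to the discrete-time survival model above, can be sketched as follows. The logistic hazard with a longitudinal covariate and a frailty term is a generic stand-in, not the paper's fitted model; all names and values are illustrative.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def window_event_probability(alphas, beta, covariates, frailty=0.0):
    """P(event in window) = 1 - prod_t (1 - h_t), where the
    discrete hazard at period t is h_t = logistic(alpha_t + beta*z_t + frailty);
    z_t is a longitudinal measurement, frailty a shared random effect."""
    surv = 1.0
    for a, z in zip(alphas, covariates):
        h = logistic(a + beta * z + frailty)
        surv *= (1.0 - h)
    return 1.0 - surv
```

Raising the longitudinal covariate (with a positive beta) raises every period hazard and hence the window event probability, which is how repeated measurements sharpen the prediction.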
Modeling ventilation time in forage tower silos.
Bahloul, A; Chavez, M; Reggio, M; Roberge, B; Goyer, N
2012-10-01
The fermentation process in forage tower silos produces a significant amount of gases, which can easily reach dangerous concentrations and constitute a hazard for silo operators. To maintain a non-toxic environment, silo ventilation is applied. Literature reviews show that the fermentation gases reach high concentrations in the headspace of a silo and flow down the silo from the chute door to the feed room. In this article, a detailed parametric analysis of forced ventilation scenarios built via numerical simulation was performed. The methodology is based on the solution of the Navier-Stokes equations, coupled with transport equations for the gas concentrations. Validation was achieved by comparing the numerical results with experimental data obtained from a scale model silo using the tracer gas testing method for O2 and CO2 concentrations. Good agreement was found between the experimental and numerical results. The set of numerical simulations made it possible to establish a simple analytical model to predict the minimum time required to ventilate a silo to make it safe to enter. This ventilation time takes into account the headspace above the forage, the airflow rate, and the initial concentrations of O2 and CO2. The final analytical model was validated with available results from the literature.
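A minimal sketch of such an analytical ventilation-time model, assuming only first-order dilution of a well-mixed headspace (the article's model also accounts for O2 and was calibrated against scale-model data; the variable names and numbers here are illustrative):

```python
import math

def ventilation_time(V_headspace, Q_air, c0, c_safe, c_ambient):
    """Minimum time for a well-mixed headspace of volume V_headspace,
    ventilated at airflow rate Q_air with air at concentration c_ambient,
    to dilute a gas from c0 down to the safe threshold c_safe."""
    if c0 <= c_safe:
        return 0.0
    return (V_headspace / Q_air) * math.log((c0 - c_ambient) / (c_safe - c_ambient))

def concentration(t, V, Q, c0, c_amb):
    """Exponential dilution: C(t) = c_amb + (c0 - c_amb) * exp(-Q*t/V)."""
    return c_amb + (c0 - c_amb) * math.exp(-Q * t / V)
```

For example, a 50 m3 headspace at 5% CO2, ventilated at 10 m3/min toward ambient 0.04%, needs V/Q times the log of the concentration ratio to reach a 0.5% threshold.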
RTMOD: Real-Time MODel evaluation
Energy Technology Data Exchange (ETDEWEB)
Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)
2000-01-01
The 1998 - 1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project that involved about 50 models run in several Institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real-time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for display, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with a one-day advance warning. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time ...
Stepped care model of pain management and quality of pain care in long-term opioid therapy.
Moore, Brent A; Anderson, Daren; Dorflinger, Lindsey; Zlateva, Ianita; Lee, Allison; Gilliam, Wesley; Tian, Terrence; Khatri, Khushbu; Ruser, Christopher B; Kerns, Robert D
2016-01-01
Successful organizational improvement processes depend on application of reliable metrics to establish targets and to monitor progress. This study examined the utility of the Pain Care Quality (PCQ) extraction tool in evaluating implementation of the Stepped Care Model for Pain Management at one Veterans Health Administration (VHA) healthcare system over 4 yr and in a non-VHA Federally qualified health center (FQHC) over 2 yr. Two hundred progress notes per year from VHA and 150 notes per year from FQHC primary care prescribers of long-term opioid therapy (>90 consecutive days) were randomly sampled. Each note was coded for the presence or absence of key dimensions of PCQ (i.e., pain assessment, treatment plans, pain reassessment/outcomes, patient education). General estimating equations controlling for provider and facility were used to examine changes in PCQ items over time. Improvements in the VHA were noted in pain reassessment and patient education, with trends in positive directions for all dimensions. Results suggest that the PCQ extraction tool is feasible and may be responsive to efforts to promote organizational improvements in pain care. Future research is indicated to improve the reliability of the PCQ extraction tool and enhance its usability.
Stepped care model for pain management and quality of pain care in long-term opioid therapy
Directory of Open Access Journals (Sweden)
Brent A. Moore, PhD
2016-02-01
Full Text Available Successful organizational improvement processes depend on application of reliable metrics to establish targets and to monitor progress. This study examined the utility of the Pain Care Quality (PCQ) extraction tool in evaluating implementation of the Stepped Care Model for Pain Management at one Veterans Health Administration (VHA) healthcare system over 4 yr and in a non-VHA Federally qualified health center (FQHC) over 2 yr. Two hundred progress notes per year from VHA and 150 notes per year from FQHC primary care prescribers of long-term opioid therapy (>90 consecutive days) were randomly sampled. Each note was coded for the presence or absence of key dimensions of PCQ (i.e., pain assessment, treatment plans, pain reassessment/outcomes, patient education). General estimating equations controlling for provider and facility were used to examine changes in PCQ items over time. Improvements in the VHA were noted in pain reassessment and patient education, with trends in positive directions for all dimensions. Results suggest that the PCQ extraction tool is feasible and may be responsive to efforts to promote organizational improvements in pain care. Future research is indicated to improve the reliability of the PCQ extraction tool and enhance its usability.
Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov
2014-01-01
Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient constant was considered. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones and this model described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
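The authors estimate their mass-transfer parameters with the downhill simplex method; as a simpler, self-contained illustration of fitting a fixed-bed breakthrough curve, the sketch below fits the classical Thomas model by linearized least squares. This is not the authors' model or method, and all parameter values are made up.

```python
import math

def thomas_model(t, kTh, q0, m, Q, c0):
    """Thomas breakthrough model: C/C0 = 1 / (1 + exp(kTh*(q0*m/Q - c0*t)))
    with rate constant kTh, capacity q0, bed mass m, flow rate Q, feed c0."""
    return 1.0 / (1.0 + math.exp(kTh * (q0 * m / Q - c0 * t)))

def fit_thomas_linearized(ts, ratios, m, Q, c0):
    """Estimate kTh and q0 by ordinary least squares on the linearized
    form ln(C0/C - 1) = kTh*q0*m/Q - kTh*c0*t (a straight line in t)."""
    ys = [math.log(1.0 / r - 1.0) for r in ratios]
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    intercept = ybar - slope * tbar
    kTh = -slope / c0
    q0 = intercept * Q / (kTh * m)
    return kTh, q0
```

On noiseless synthetic data the linearization recovers the generating parameters exactly; with real breakthrough data a direct nonlinear fit (e.g. downhill simplex, as in the paper) is usually preferred.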
Energy Technology Data Exchange (ETDEWEB)
Ducomet, B
2000-07-01
We review some models of self-gravitating fluids, used to describe, in a unified framework, collective vibration modes of heavy nuclei and the large-time evolution of radiation and reacting stars. (authors)
Outlier Detection in Structural Time Series Models
DEFF Research Database (Denmark)
Marczak, Martyna; Proietti, Tommaso
Structural change affects the estimation of economic signals, like the underlying growth rate or the seasonally adjusted series. An important issue, which has attracted a great deal of attention also in the seasonal adjustment literature, is its detection by an expert procedure. The general ... We investigate via Monte Carlo simulations how this approach performs for detecting additive outliers and level shifts in the analysis of nonstationary seasonal time series. The reference model is the basic structural model, featuring a local linear trend, possibly integrated of order two, stochastic seasonality, and a stationary component. Further, we apply both kinds of indicator saturation to detect additive outliers and level shifts in the industrial production series in five European countries.
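One common way to operationalize additive-outlier detection in a structural time series, sketched here with a local-level (random-walk plus noise) model rather than the full basic structural model of the paper: flag observations whose standardized one-step-ahead Kalman innovations are large. The variances and threshold below are illustrative.

```python
def local_level_outliers(ys, sigma_eps2=1.0, sigma_eta2=0.1, threshold=3.0):
    """Flag candidate additive outliers as indices whose standardized
    one-step-ahead innovation from a local-level Kalman filter exceeds
    the threshold. y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t."""
    a, P = ys[0], 1e6          # diffuse initialization at the first point
    flagged = []
    for i, y in enumerate(ys[1:], start=1):
        P = P + sigma_eta2     # predict step (random-walk level)
        F = P + sigma_eps2     # innovation variance
        v = y - a              # one-step-ahead innovation
        if abs(v) / F ** 0.5 > threshold:
            flagged.append(i)
        K = P / F              # Kalman gain
        a = a + K * v
        P = (1 - K) * P
    return flagged
```

Note that a single spike also inflates the next few innovations, so in practice flags are refined jointly (which is what indicator saturation does more rigorously).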
Al-Shakran, Mohammad; Kibler, Ludwig A.; Jacob, Timo; Ibach, Harald; Beltramo, Guillermo L.; Giesen, Margret
2016-09-01
This is Part I of two closely related papers, where we show that the specific adsorption of anions leads to a failure of the nearest-neighbor Ising model to describe island perimeter curvatures on Au(100) electrodes in dilute KBr, HCl and H2SO4 electrolytes and the therewith derived step diffusivity vs. step orientation. This result has major consequences for theoretical studies aiming at the understanding of growth, diffusion and degradation phenomena. Part I focuses on the experimental data. As shown theoretically in detail in Part II (doi:10.1016/j.susc.2016.03.022), a set of nearest-neighbor and next-nearest-neighbor interaction energies (ɛNN, ɛNNN) can uniquely be derived from the diffusivity of steps along and . We find strong repulsive next-nearest neighbor (NNN) interaction in KBr and HCl, whereas the NNN interaction is negligible for H2SO4. The NNN repulsive interaction energy ɛNNN therefore correlates positively with the Gibbs adsorption energy of the anions. We find furthermore that ɛNNN increases with increasing Br- and Cl- coverage. The results for ɛNN and ɛNNN are quantitatively consistent with the coverage dependence of the step line tension. We thereby establish a sound experimental base for theoretical studies on the energetics of steps in the presence of specific adsorption.
Rouwet, Dmitri
2016-04-01
evaporative degassing plumes can be useful as a short-term monitoring tool, but only if the underlying process of gas flushing through acidic lakes is better understood and linked with the lake water chemistry; (2) The second method forgets about chemical kinetics, degassing models and dynamics of phreatic eruptions, and sticks to the classical principle in geology of "the past is the key for the future". How did lake chemistry parameters vary during the various stages of unrest and eruption, on a purely mathematical basis? Can we recognise patterns in the numerical values related to the changes in volcanic activity? Water chemistry alone, as a monitoring tool for extremely dynamic and erupting crater lake systems, is inefficient in revealing short-term precursors of single phreatic eruptions within the current perspective of the residence-time-dependent monitoring time window. The monitoring rules established over decades based only on water chemistry have thus somehow become obsolete and need revision.
Konowrocki, Robert; Szolc, Tomasz; Pochanke, Andrzej; Pręgowska, Agnieszka
2016-03-01
This paper aims to investigate, both experimentally and theoretically, the electromechanical dynamic interaction between a driving stepping motor and a driven laboratory belt-transporter system. A test-rig imitates the operation of a robotic device in the form of a working tool-carrier under translational motion. The object under consideration is equipped with measurement systems, which enable the registration of electrical and mechanical quantities. Analytical considerations are performed by means of a circuit model of the electric motor and a discrete, non-linear model of the mechanical system. Various scenarios of the working tool-carrier motion and positioning by the belt-transporter are measured and simulated; in all cases the electric current control of the driving motor has been applied. The main goal of this study is to investigate the influence of the stepping motor control parameters along with various mechanical friction models on the precise positioning of a laboratory robotic device.
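The circuit-model torque of a two-phase hybrid stepping motor, as used in such electromechanical analyses, can be sketched in its standard textbook form. This is not necessarily the exact model of the paper; the motor constants (Km, Nr, detent amplitude) are illustrative.

```python
import math

def stepper_torque(i_a, i_b, theta, Km=0.5, Nr=50, Td=0.05):
    """Electromagnetic torque of a two-phase hybrid stepping motor:
    T = -Km*i_a*sin(Nr*theta) + Km*i_b*cos(Nr*theta) - Td*sin(4*Nr*theta),
    with phase currents i_a, i_b, rotor angle theta (rad), torque
    constant Km, rotor tooth count Nr and detent amplitude Td."""
    return (-Km * i_a * math.sin(Nr * theta)
            + Km * i_b * math.cos(Nr * theta)
            - Td * math.sin(4 * Nr * theta))
```

With only phase A energized, theta = 0 is an equilibrium, and small angular displacements produce a restoring torque, which is the mechanism behind open-loop step positioning (and, coupled to belt friction, behind positioning errors).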
de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.
2008-01-01
The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application, the system is used for the characterization of fresh versus aged olive oil.
Tucker Cohen, Elisabeth; Heller, Kathryn Wolff; Alberto, Paul; Fredrick, Laura D.
2008-01-01
The use of a three-step decoding strategy with constant time delay for teaching decoding and word reading to students with mild and moderate mental retardation was investigated in this study. A multiple probe design was used to examine the percentage of words correctly decoded and read as well as the percentage of sounds correctly decoded. The…
Khamis, Nehal N; Satava, Richard M; Alnassar, Sami A; Kern, David E
2016-01-01
Despite the rapid growth in the use of simulation in health professions education, courses vary considerably in quality. Many do not integrate efficiently into an overall school/program curriculum or conform to academic accreditation requirements. Moreover, some of the guidelines for simulation design are specialty specific. We designed a model that integrates best practices for effective simulation-based training and a modification of Kern et al.'s 6-step approach for curriculum development. We invited international simulation and health professions education experts to complete a questionnaire evaluating the model. We reviewed comments and suggested modifications from respondents and reached consensus on a revised version of the model. We recruited 17 simulation and education experts. They expressed a consensus on the seven proposed curricular steps: problem identification and general needs assessment, targeted needs assessment, goals and objectives, educational strategies, individual assessment/feedback, program evaluation, and implementation. We received several suggestions for descriptors that applied the steps to simulation, leading to some revisions in the model. We have developed a model that integrates principles of curriculum development and simulation design that is applicable across specialties. Its use could lead to high-quality simulation courses that integrate efficiently into an overall curriculum.
Modeling of a stair-climbing wheelchair mechanism with high single-step capability.
Lawn, Murray J; Ishimatsu, Takakazu
2003-09-01
In the field of providing mobility for the elderly and disabled, the aspect of dealing with stairs continues largely unresolved. This paper focuses on presenting the development of a stair-climbing wheelchair mechanism with high single-step capability. The mechanism is based on front and rear wheel clusters connected to the base (chair) via powered linkages so as to permit both autonomous stair ascent and descent in the forward direction, and high single-step functionality for such as direct entry to and from a van. Primary considerations were inherent stability, provision of a mechanism that is physically no larger than a standard powered wheelchair, aesthetics, and being based on readily available low-cost components.
Real-time DIRCM system modeling
Petersson, Mikael
2004-12-01
Directed infrared countermeasures (DIRCM) play an increasingly important role in electronic warfare to counteract threats posed by infrared seekers. The usefulness and performance of such countermeasures depend, for example, on atmospheric conditions (attenuation and turbulence) and platform vibrations, causing pointing and tracking errors for the laser beam and reducing the power transferred to the seeker aperture. These problems make it interesting to simulate the performance of a DIRCM system in order to understand how easy or difficult it is to counteract an approaching threat and evaluate limiting factors in various situations. This paper describes a DIRCM model that has been developed, including atmospheric effects such as attenuation and turbulence as well as closed loop tracking algorithms, where the retro reflex of the laser is used for the pointing control of the beam. The DIRCM model is part of a large simulation framework (EWSim), which also incorporates several descriptions of different seekers (e.g. reticle, rosette, centroid, nutating cross) and models of robot dynamics. Effects of a jamming laser on a specific threat can be readily verified by simulations within this framework. The duel between missile and countermeasure is simulated in near real-time and visualized graphically in 3D. A typical simulation with a reticle seeker jammed by a modulated laser is included in the paper.
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (> 1 x 10(exp -20) moles/cc) in the mixture which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for the H2/O2. A similar correlation is also developed using data from NASA s Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.
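The two building blocks described above, an exponential-fit correlation for the chemical kinetic time and a simple first-order conversion expression, can be sketched as follows. The correlation coefficients are purely illustrative placeholders, not the regressed NASA values.

```python
import math

def kinetic_time(phi, T, P, a0=1e-9, a1=8000.0, a2=-0.5, a3=-1.0):
    """Hypothetical exponential-fit correlation for the chemical kinetic
    time tau (s) as a function of overall equivalence ratio phi,
    temperature T (K) and pressure P (atm). Coefficients a0..a3 are
    illustrative only; the actual values come from regressing detailed
    GLSENS kinetics over the operating space."""
    return a0 * math.exp(a1 / T) * (phi ** a2) * (P ** a3)

def first_order_conversion(residence_time, tau):
    """Conversion from a single first-order rate expression:
    X = 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-residence_time / tau)
```

Comparing tau against the turbulent mixing time then tells which process limits the reactor: when tau is much smaller than the mixing time, the reaction is mixing-limited.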
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1 x 10(exp -20) moles per cc) in the mixture which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T(sub 4)). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/Air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T(sub 4)) as a function of overall fuel/air ratio, pressure and initial temperature (T(sub 3)). High values of the regression coefficient R squared are obtained.
Castagnoli, G C
1999-01-01
In former work, quantum computation has been shown to be a problem solving process essentially affected by both the reversible dynamics leading to the state before measurement, and the logical-mathematical constraints introduced by quantum measurement (in particular, the constraint that there is only one measurement outcome). This dual influence, originated by independent initial and final conditions, justifies the quantum computation speed-up and is not representable inside dynamics, namely as a one-way propagation. In this work, we reformulate von Neumann's model of quantum measurement in the light of the above findings. We embed it in a broader representation based on the quantum logic gate formalism and capable of describing the interplay between dynamical and non-dynamical constraints. The two steps of the original model, namely (1) dynamically reaching a complete entanglement between pointer and quantum object and (2) enforcing the one-outcome-constraint, are unified and reversed. By representing step (2) r...
Moles, Pamela; Oliva, Mónica; Safont, Vicent S
2011-01-20
By using 6,7,8-trioxabicyclo[3.2.2]nonane as the artemisinin model and dihydrated Fe(OH)2 as the heme model, we report a theoretical study of the late steps of the artemisinin decomposition process. The study offers two viewpoints: first, the energetic and geometric parameters are obtained and analyzed, and hence different reaction paths have been studied. The second point of view uses the electron localization function (ELF) and the atoms in molecules (AIM) methodology to conduct a complete topological study of such steps. The MO analysis together with the spin density description has also been used. The obtained results agree nicely with the experimental data, and a new mechanistic proposal that explains the experimentally determined outcome of deoxyartemisinin has been postulated.
A two-step model for Langerhans cell migration to skin-draining LN
2008-01-01
Although the role of Langerhans cells (LC) in skin immune responses is still a matter of debate, it is known that LC require the chemokine receptor CCR7 for migrating to skin-draining LN. A report in the current issue of the European Journal of Immunology unfolds some of the intricacies of LC migration, showing that LC need CXCR4, but not CCR7, for their migration from the epidermis to the dermis. Thus, LC migration to skin-draining LN occurs in two distinct phases: a first step from the epid...
Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M
2012-09-21
The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach: calculating model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters.
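The stance phase of the SLIP model can be sketched numerically; the following is a minimal illustration with hypothetical parameter values (leg stiffness, rest length, mass), not the dynamical leg parameters obtained in the study.

```python
import math

# Minimal SLIP stance-phase sketch with hypothetical parameters (leg
# stiffness k, rest leg length l0, body mass m), integrated by explicit
# Euler. The foot is fixed at the origin; the leg acts as a spring only
# while compressed, and stance ends when the leg regains its rest length.
def slip_stance(x, y, vx, vy, k=20000.0, l0=1.0, m=80.0, g=9.81, dt=1e-4):
    compressed = False
    for _ in range(1_000_000):
        l = math.hypot(x, y)              # current leg length
        if compressed and l >= l0:
            return x, y, vx, vy           # take-off: leg restored
        if l < l0:
            compressed = True
            f = k * (l0 - l)              # compressive spring force
            ax = f * x / (l * m)
            ay = f * y / (l * m) - g
        else:
            ax, ay = 0.0, -g              # not yet compressed: ballistic
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
    raise RuntimeError("no take-off within step budget")

# Touch-down: center of mass slightly behind the foot, moving forward and down.
x, y, vx, vy = slip_stance(x=-0.15, y=math.sqrt(1.0 - 0.15 ** 2),
                           vx=3.0, vy=-0.5)
```

Fitting such a model to data would mean adjusting k, l0, and the touch-down state until the simulated center of mass trajectory reproduces the measured steps.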
DEFF Research Database (Denmark)
Nielsen, A. C. Y.; Bottiger, B.; Midgley, S. E.
2013-01-01
As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing its sensitivity and specificity compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing. Enterovirus and human parechovirus type 3 had a similar seasonal pattern, with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses which can cause...
Time series models of symptoms in schizophrenia.
Tschacher, Wolfgang; Kupper, Zeno
2002-12-15
The symptom courses of 84 schizophrenia patients (mean age: 24.4 years; mean previous admissions: 1.3; 64% males) of a community-based acute ward were examined to identify dynamic patterns of symptoms and to investigate the relation between these patterns and treatment outcome. The symptoms were monitored by systematic daily staff ratings using a scale composed of three factors: psychoticity, excitement, and withdrawal. Patients showed moderate to high symptomatic improvement documented by effect size measures. Each of the 84 symptom trajectories was analyzed by time series methods using vector autoregression (VAR) that models the day-to-day interrelations between symptom factors. Multiple and stepwise regression analyses were then performed on the basis of the VAR models. Two VAR parameters were found to be associated significantly with favorable outcome in this exploratory study: 'withdrawal preceding a reduction of psychoticity' as well as 'excitement preceding an increase of withdrawal'. The findings were interpreted as generating hypotheses about how patients cope with psychotic episodes.
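The day-to-day interrelations that a VAR model captures can be illustrated with a toy simulation; the coefficient matrix below is hypothetical and merely mimics the signs of the two reported associations (withdrawal preceding a reduction of psychoticity, excitement preceding an increase of withdrawal), not the study's estimates.

```python
import random

# Hypothetical VAR(1) for three daily symptom factors
# [psychoticity, excitement, withdrawal]: x_t = A x_{t-1} + noise.
# The signs mimic the two reported associations; the numbers are made up.
A = [
    [0.6, 0.1, -0.2],   # psychoticity reduced by previous-day withdrawal
    [0.0, 0.5,  0.0],   # excitement
    [0.1, 0.2,  0.7],   # withdrawal increased by previous-day excitement
]

def step(x, rng):
    return [sum(A[i][j] * x[j] for j in range(3)) + rng.gauss(0.0, 0.05)
            for i in range(3)]

rng = random.Random(0)
x = [1.0, 0.5, 0.5]       # initial factor scores
trajectory = [x]
for _ in range(30):        # 30 simulated days of ratings
    x = step(x, rng)
    trajectory.append(x)
```

In the study the direction is reversed: the coefficients of A are estimated from each patient's observed daily ratings and then related to outcome.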
Accurate 2D/3D electromagnetic modeling for time-domain airborne EM systems
Yin, C.; Hodges, G.
2012-12-01
The existing industry software cannot deliver correct results for 3D time-domain airborne EM responses. In this paper, starting from the Fourier transform and convolution, we compare the stability of different modeling techniques and analyze the reason for unstable calculations of the time-domain airborne EM responses. We find that the singularity at very early time of the impulse responses used in the convolution is responsible for the instability of the modeling (Fig. 1). Based on this finding, we put forward an algorithm that uses the step response rather than the impulse response of the airborne EM system for the convolution, creating a stable algorithm that delivers precise results and preserves the integral/derivative relationship between the magnetic field B and the magnetic induction dB/dt. A three-step transformation procedure for the modeling is proposed: 1) output the frequency-domain EM response data from the existing software; 2) transform into the step response by digital Fourier/Hankel transform; 3) convolve the step response with the transmitting current or its derivatives. The method has proved to work very well (Fig. 2). The algorithm can be extended to the modeling of other time-domain ground and airborne EM system responses. Fig. 1: Comparison of impulse and step responses for an airborne EM system. Fig. 2: Bz and dBz/dt calculated from step (middle panel) and impulse (lower panel) responses for the same 3D model as in Fig. 1.
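Step 3 of the proposed procedure can be sketched as a discrete convolution; the step response and current waveform below are assumed, illustrative choices, not outputs of the authors' software.

```python
import math

# Sketch of step 3: convolve a smooth (hypothetical) step response with the
# time derivative of the transmitter current. Unlike the impulse response,
# the step response is finite at early time, so the discrete convolution
# stays stable.
dt = 1e-5                                            # s, sample interval
n = 1000
t = [i * dt for i in range(n)]
h_step = [1.0 - math.exp(-ti / 2e-3) for ti in t]    # assumed step response
current = [min(ti / 1e-3, 1.0) for ti in t]          # 1 ms linear ramp to 1 A
dIdt = [(current[i + 1] - current[i]) / dt for i in range(n - 1)]

def convolve_step(h, d, dt):
    """B(t_k) ~ sum_j h(t_k - t_j) * dI/dt(t_j) * dt."""
    return [sum(h[k - j] * d[j] for j in range(k + 1)) * dt
            for k in range(len(d))]

B = convolve_step(h_step, dIdt, dt)                  # magnetic field B(t)
```

Convolving with the current's second derivative instead would give dB/dt while keeping the same stable step-response kernel.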
Magana, Donny; Parul, Dzmitry; Dyer, R Brian; Shreve, Andrew P
2011-05-01
Time-resolved step-scan Fourier transform infrared (FT-IR) spectroscopy has been shown to be invaluable for studying excited-state structures and dynamics in both biological and inorganic systems. Despite the established utility of this method, technical challenges continue to limit data quality and wider-ranging applications. A critical problem has been the low laser repetition rate and interferometer stepping rate (both typically 10 Hz) used for data acquisition. Here we demonstrate significant improvement in the quality of time-resolved spectra through the use of a kHz repetition rate laser to achieve kHz excitation and data collection rates while stepping the spectrometer at 200 Hz. We have studied the metal-to-ligand charge transfer excited state of Ru(bipyridine)(3)Cl(2) in deuterated acetonitrile to test and optimize high repetition rate data collection. Comparison of different interferometer stepping rates reveals an optimum rate of 200 Hz due to minimization of long-term baseline drift. With the improved collection efficiency and signal-to-noise ratio, better assignments of the MLCT excited-state bands can be made. Using optimized parameters, carbonmonoxy myoglobin in deuterated buffer is also studied by observing the infrared signatures of carbon monoxide photolysis upon excitation of the heme. We conclude from these studies that a substantial increase in performance of ss-FT-IR instrumentation is achieved by coupling commercial infrared benches with kHz repetition rate lasers.
Reyes-Palomares, Armando; Sánchez-Jiménez, Francisca; Medina, Miguel Ángel
2009-05-01
A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever faster numerical simulations of mathematical models. Mathematical modeling plays an essential role in new systems biology approaches. As a complex, integrated system, metabolism is a suitable topic of study for systems biology approaches. However, up until recently, this topic has not been properly covered in biochemistry courses. This communication reports the development and implementation of a practical lesson plan on metabolic modeling and simulation.
On estimation and tests of time-varying effects in the proportional hazards model
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Martinussen, Torben
Grambsch and Therneau (1994) suggest using Schoenfeld's residuals to investigate whether some of the regression coefficients in Cox's (1972) proportional hazards model are time-dependent. Their method is a one-step procedure based on Cox's initial estimate. We suggest an algorithm which in the first...
Modeling Departure Time Choice with Stochastic Networks
Li, H.; Bliemer, M.C.J.; Bovy, P.H.L.
2010-01-01
Stochastic supply and fluctuating travel demand lead to stochasticity in the travel times and travel costs experienced by travelers, both within a day and from day to day. Many studies show that travel time unreliability has significant impacts on travelers' choice behavior
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
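The two-stage idea can be illustrated with a toy example in which a simple correlation-based preselection stands in for the paper's information-theoretic causal preselection, followed by nearest-neighbor prediction in the reduced predictor space; all data below are synthetic.

```python
import math
import random

# Toy two-stage prediction: a correlation-based preselection stands in for
# the information-theoretic causal preselection, then nearest-neighbor
# prediction runs in the reduced predictor space.
def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def knn_predict(X, y, q, k=3):
    """Average the targets of the k training points nearest to query q."""
    order = sorted(range(len(X)),
                   key=lambda i: sum((X[i][j] - q[j]) ** 2 for j in range(len(q))))
    return sum(y[i] for i in order[:k]) / k

rng = random.Random(1)
# The target depends on predictor 0 only; predictors 1-4 are pure noise.
X = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(300)]
y = [math.sin(3 * row[0]) + rng.gauss(0.0, 0.05) for row in X]

# Preselection: keep only the predictor most correlated with the target.
scores = [abs(corr([row[j] for row in X], y)) for j in range(5)]
keep = sorted(range(5), key=lambda j: -scores[j])[:1]
Xr = [[row[j] for j in keep] for row in X]

pred = knn_predict(Xr, y, q=[0.5])   # predict near x0 = 0.5
```

Discarding the noise predictors before the nearest-neighbor step is what keeps the distance computation from being swamped by irrelevant dimensions.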
Two-Step Acceleration Model of Cosmic Rays at Middle-Aged SNR
Inoue, Tsuyoshi; Inutsuka, Shu-ichiro
2010-01-01
Recent gamma-ray observations of middle-aged supernova remnants revealed a mysterious broken power-law spectrum. Using three-dimensional magnetohydrodynamics simulations, we show that the interaction between a supernova blast wave and interstellar clouds formed by thermal instability generates multiple reflected shocks. The typical Mach numbers of the reflected shocks are shown to be M ~ 2 depending on the density contrast between the diffuse intercloud gas and clouds. These secondary shocks can further energize cosmic-ray particles originally accelerated at the blast-wave shock. This "two-step" acceleration scenario reproduces the observed gamma-ray spectrum and predicts the high-energy spectral index ranging approximately from 3 to 4.
El Gharamti, Mohamad
2015-11-26
The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following a state-parameters joint augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme in which a one-step-ahead smoothing of the state is performed before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in the computational cost.
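A minimal sketch of the underlying joint (state-augmented) EnKF update, on a scalar toy problem rather than the subsurface transport model, assuming the standard perturbed-observation formulation:

```python
import random

# Joint state-parameter EnKF update on a scalar toy problem: each member
# carries an augmented vector (state x, parameter a); a noisy observation
# of x updates both through the ensemble covariances
# (standard perturbed-observation form).
def enkf_update(ensemble, obs, obs_var, rng):
    n = len(ensemble)
    means = [sum(m[i] for m in ensemble) / n for i in range(2)]
    # covariance of each augmented component with the observed state (index 0)
    cov = [sum((m[i] - means[i]) * (m[0] - means[0]) for m in ensemble) / (n - 1)
           for i in range(2)]
    gain = [c / (cov[0] + obs_var) for c in cov]     # Kalman gain
    updated = []
    for m in ensemble:
        innov = obs + rng.gauss(0.0, obs_var ** 0.5) - m[0]
        updated.append([m[i] + gain[i] * innov for i in range(2)])
    return updated

rng = random.Random(2)
# Prior ensemble: state around 0, parameter around 1; the observed state is 2.
ens = [[rng.gauss(0.0, 1.0), rng.gauss(1.0, 0.5)] for _ in range(100)]
ens = enkf_update(ens, obs=2.0, obs_var=0.1, rng=rng)
mean_x = sum(m[0] for m in ens) / len(ens)           # pulled toward the observation
```

The parameter is corrected only through its sampled covariance with the observed state; the paper's smoothing step changes which state estimate that covariance is computed against.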
Time-varying parameter auto-regressive models for autocovariance nonstationary time series
Institute of Scientific and Technical Information of China (English)
FEI WanChun; BAI Lun
2009-01-01
In this paper, autocovariance nonstationary time series are clearly defined on a family of time series. We propose three types of TVPAR (time-varying parameter auto-regressive) models for autocovariance nonstationary time series: the full-order TVPAR model, the time-unvarying order TVPAR model, and the time-varying order TVPAR model. The related minimum AIC (Akaike information criterion) estimations are carried out.
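The time-varying parameter idea can be sketched with a scalar AR(1) whose coefficient drifts and is re-estimated by ordinary least squares on a sliding window; the window length is an illustrative choice, not a value from the paper.

```python
import random

# Scalar AR(1) with a slowly drifting coefficient, re-estimated by OLS on a
# sliding window (window length is an illustrative choice).
rng = random.Random(3)
n = 600
phi_true = [0.2 + 0.6 * t / n for t in range(n)]     # coefficient drifts 0.2 -> 0.8
x = [0.0]
for t in range(1, n):
    x.append(phi_true[t] * x[-1] + rng.gauss(0.0, 1.0))

def tvp_ar1(x, window=100):
    """OLS estimate of phi on each sliding window of (x_{t-1}, x_t) pairs."""
    est = []
    for t in range(window, len(x)):
        xs = x[t - window:t]
        num = sum(xs[i] * xs[i - 1] for i in range(1, window))
        den = sum(xs[i - 1] ** 2 for i in range(1, window))
        est.append(num / den)
    return est

phi_hat = tvp_ar1(x)   # tracks the drift of the AR coefficient
```

Choosing the window (or, more generally, the model order at each time) is where a criterion such as the minimum AIC enters.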