WorldWideScience

Sample records for model time step

  1. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

    In this work, we propose an adaptive time-step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires long integration times to reach a steady state, so a large-time-step method is necessary. Unconditionally energy-stable schemes are used to solve the PFC model, and the time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time-step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long-time simulations.
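The energy-based step selection described above can be sketched in a few lines. The formula dt = dt_max / sqrt(1 + alpha * |E'(t)|^2) is a common form for this kind of energy-derivative adaptivity; the decaying energy history below is a toy stand-in, not actual PFC dynamics, and the parameter values are illustrative.

```python
import numpy as np

def adaptive_dt(dEdt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    # Small steps while the energy changes rapidly, large steps near steady state.
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dEdt**2))

# Toy energy history E(t) = exp(-t): |E'(t)| decays, so dt grows toward dt_max.
ts, t = [], 0.0
while t < 20.0:
    dt = adaptive_dt(-np.exp(-t))
    ts.append(dt)
    t += dt
```

Early in the run the steps are small (resolving the transient); near steady state they approach dt_max, which is exactly the behavior that saves CPU time in long simulations.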

  2. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either use a one-step model recursively or treat the multi-step task with an independent model for each horizon. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
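The iterative-versus-independent distinction the abstract draws can be made concrete with a least-squares AR fit: the iterative approach feeds one-step predictions back as inputs, while the direct (independent) approach fits a separate model per horizon. This is a minimal sketch on synthetic AR(2) data, not the paper's VLM method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AR(2) series (coefficients 0.6, -0.3 are arbitrary choices).
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)

def fit_ar(series, lags, horizon):
    # Least-squares fit predicting `horizon` steps ahead from `lags` past values.
    X = np.array([series[t - lags:t] for t in range(lags, len(series) - horizon + 1)])
    y = series[lags + horizon - 1:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

h = 5
w1 = fit_ar(x[:400], 2, 1)   # one-step model (for the iterative approach)
wh = fit_ar(x[:400], 2, h)   # direct h-step model (the independent approach)

# Iterative: apply the one-step model recursively, feeding predictions back in.
hist = list(x[398:400])
for _ in range(h):
    hist.append(np.dot(w1, hist[-2:]))
iter_pred = hist[-1]

# Direct: one shot from the last observed values.
direct_pred = np.dot(wh, x[398:400])
```

The iterative route accumulates error through the fed-back predictions; the direct route loses the within-horizon dependencies, which is the gap the VLM models aim to close.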

  3. Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data

    Science.gov (United States)

    Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.

    2012-12-01

    In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering nearly a 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. The stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches creates the ground connection for the RS to start. HSV data acquired with a short-focal-length lens at ranges of 5-25 km from the flash are useful for obtaining the 2D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time-dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
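The field-change matching in multi-charge/multi-dipole leader models rests on the standard electrostatic result for a point charge above a perfectly conducting ground (image charge included). A minimal sketch; the segment charge, heights, and sensor distance below are hypothetical, not values from the study.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dEz(q, h, d):
    # Vertical electric field change at ground level, horizontal distance d,
    # from a point charge q at height h over perfectly conducting ground:
    # E = 2*q*h / (4*pi*eps0*(h^2 + d^2)^(3/2)), the factor 2 from the image charge.
    return q * h / (2.0 * math.pi * EPS0 * (h * h + d * d) ** 1.5)

# Hypothetical leader channel: -1 mC per 100 m segment from 5 km down to 4 km,
# summed at a sensor 10 km away (the discretized analogue of a channel charge
# distribution in a multi-charge model).
heights = [4050 + 100 * i for i in range(10)]
total = sum(dEz(-1e-3, h, 10e3) for h in heights)
```

Matching measured field changes at several sensor locations then amounts to adjusting the per-segment charges until the summed contributions fit the observations.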

  4. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho (SABR) model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
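For orientation, the SABR dynamics being simulated are dS = sigma * S^beta dW1, dsigma = alpha * sigma dW2 with corr(dW1, dW2) = rho. Below is a plain Euler/log-Euler Monte Carlo baseline, the naive many-step approach that methods like the paper's aim to improve on; it is not the authors' technique, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sabr_euler_call(S0=1.0, sig0=0.3, alpha=0.4, beta=0.7, rho=-0.3,
                    K=1.0, T=1.0, nsteps=100, npaths=20000):
    # Euler step for the asset, log-Euler (exact lognormal) step for the
    # volatility; the asset is absorbed at zero, as is common for beta < 1.
    dt = T / nsteps
    S = np.full(npaths, S0)
    sig = np.full(npaths, sig0)
    for _ in range(nsteps):
        z1 = rng.normal(size=npaths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=npaths)
        S = np.maximum(S + sig * S**beta * np.sqrt(dt) * z1, 0.0)
        sig = sig * np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)
    return float(np.maximum(S - K, 0.0).mean())

price = sabr_euler_call()
```

An at-the-money call with these inputs should price near 0.12 (roughly 0.4 * sig0 * sqrt(T) for S0 = 1); the cost of the many small Euler steps is precisely what few-step methods avoid.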

  5. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
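The RA and RAW filters described above differ only in how a single displacement is distributed over the time levels, which is why the RAW modification is a one-line change in practice. A minimal sketch on the test oscillator dx/dt = i*x (the filter parameter values are typical choices, not specific to any ocean model):

```python
import numpy as np

def filtered_leapfrog_amplitude(alpha, nu=0.2, dt=0.1, nsteps=2000):
    # Leapfrog for dx/dt = i*x (unit-amplitude oscillation), with the
    # Robert-Asselin-Williams (RAW) filter applied each step.
    # alpha = 1 recovers the classical Robert-Asselin (RA) filter;
    # alpha ~ 0.53 is a typical RAW choice.
    xm, x = 1.0 + 0.0j, np.exp(1j * dt)      # exact values at t = 0 and t = dt
    for _ in range(nsteps):
        xp = xm + 2j * dt * x                # leapfrog step
        d = 0.5 * nu * (xm - 2.0 * x + xp)   # filter displacement
        xm, x = x + alpha * d, xp + (alpha - 1.0) * d
    return abs(x)                            # exact amplitude stays at 1

amp_ra = filtered_leapfrog_amplitude(alpha=1.0)    # RA: damps the solution
amp_raw = filtered_leapfrog_amplitude(alpha=0.53)  # RAW: amplitude preserved
```

With alpha = 1 all of the displacement goes to the middle time level and the three-level mean is not conserved, which is the source of the damping; alpha near 0.5 splits it across two levels and removes the leading-order amplitude error.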

  6. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and the supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena, and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate, and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally, process descriptions are not modified; rather, effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly, which indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
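The benefit of keeping an intensity distribution inside a daily model can be shown with a closed form. Assuming (purely for illustration; the paper's cdf would come from data) an exponential within-day intensity distribution with mean mu and a constant infiltration capacity Ic, the fraction of rain exceeding capacity is E[max(I - Ic, 0)] / E[I] = exp(-Ic / mu):

```python
import math

def runoff_fraction(mean_intensity, infil_capacity):
    # Fraction of daily rain becoming infiltration-excess runoff when
    # within-day intensity is exponentially distributed (assumed form):
    # E[max(I - Ic, 0)] / E[I] = exp(-Ic / mu).
    return math.exp(-infil_capacity / mean_intensity)

daily_rain = 20.0   # mm/day (hypothetical forcing)

# Naive temporal lumping: compare the daily-average intensity to capacity.
frac_lumped = 1.0 if daily_rain / 24.0 > 5.0 else 0.0

# Distribution-function approach: intense sub-daily bursts still produce runoff.
frac_cdf = runoff_fraction(mean_intensity=4.0, infil_capacity=5.0)  # mm/h
runoff = daily_rain * frac_cdf
```

The lumped comparison predicts zero runoff (0.83 mm/h average is far below the 5 mm/h capacity), while the distribution-based estimate captures the runoff generated by short high-intensity bursts, the nonlinearity the abstract is concerned with.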

  7. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  8. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  9. [Collaborative application of BEPS at different time steps].

    Science.gov (United States)

    Lu, Wei; Fan, Wen Yi; Tian, Tian

    2016-09-01

    BEPSHourly is designed to simulate the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at site scale because of its more complex model structure and time-consuming solution process. However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes, and is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposed a method for the collaborative application of BEPS at daily and hourly time steps. First, BEPSHourly was used to optimize the main photosynthetic parameters, the maximum carboxylation rate (Vcmax) and the maximum rate of photosynthetic electron transport (Jmax), at site scale; the two optimized parameters were then introduced into the BEPSDaily model to estimate NPP at regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data could improve the predictive ability of the model. The primary productivity of the different forest types in 2011, in descending order, was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters Vcmax and Jmax, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.
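The calibrate-then-transfer workflow (fit parameters at the fine time step, reuse them at the coarse one) can be sketched independently of BEPS. The rectangular-hyperbola light response, the half-saturation constant K, and the "true" Vcmax below are all toy assumptions standing in for the Farquhar-type photosynthesis model and real flux data:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 300.0  # half-saturation PAR (hypothetical constant)

def gpp(par, vcmax):
    # Toy light-response curve standing in for the model's photosynthesis scheme.
    return vcmax * par / (par + K)

# Synthetic hourly "flux tower" observations generated with Vcmax_true = 60.
par_hourly = np.clip(1500.0 * np.sin(np.linspace(0.0, np.pi, 24)), 0.0, None)
obs = gpp(par_hourly, 60.0) + rng.normal(scale=1.0, size=24)

# Step 1: calibrate Vcmax at the hourly step (least-squares grid search).
grid = np.linspace(10.0, 120.0, 1111)
sse = [np.sum((obs - gpp(par_hourly, v)) ** 2) for v in grid]
vcmax_opt = grid[int(np.argmin(sse))]

# Step 2: reuse the optimized parameter in the coarse (daily) model run.
gpp_daily = float(gpp(par_hourly, vcmax_opt).sum())
```

The point of the two-step split is that the expensive fitting happens once, at the time step where diurnal variation constrains the parameters, while the cheap daily model inherits the calibrated values for regional application.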

  10. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

    co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.

  11. Time dependent theory of two-step absorption of two pulses

    Energy Technology Data Exchange (ETDEWEB)

    Rebane, Inna, E-mail: inna.rebane@ut.ee

    2015-09-25

    A time-dependent theory of two-step absorption of two different light pulses with arbitrary durations in an electronic three-level model is proposed. The probability that the third level is excited at the moment t is found as a function of the time delay between the pulses, the spectral widths of the pulses, and the energy relaxation constants of the excited electronic levels. Time-dependent perturbation theory is applied without using the “doorway–window” approach. The time and spectral behavior of the spectrum is analyzed using as simple a model as possible. - Highlights: • A time-dependent theory of two-step absorption in a three-level model is proposed. • Two different light pulses with arbitrary durations are considered. • Time-dependent perturbation theory is applied without the “doorway–window” approach. • The time and spectral behavior of the spectra is analyzed for several cases.
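The qualitative delay dependence of two-step absorption can be illustrated with a crude rate-equation caricature (explicitly not the paper's perturbation-theory treatment, which retains coherence and spectral effects). Pulse shapes, rates, and the level-2 decay constant below are all arbitrary illustrative choices:

```python
import numpy as np

def third_level_population(delay, width=1.0, gamma2=0.5, dt=0.01, T=40.0):
    # Rate equations for a three-level ladder 1 -> 2 -> 3:
    # pulse 1 pumps 1->2 centered at t = 10, pulse 2 pumps 2->3 at t = 10 + delay;
    # level 2 decays at rate gamma2. Forward Euler integration.
    n1, n2, n3 = 1.0, 0.0, 0.0
    for t in np.arange(0.0, T, dt):
        k1 = 0.5 * np.exp(-((t - 10.0) / width) ** 2)
        k2 = 0.5 * np.exp(-((t - 10.0 - delay) / width) ** 2)
        dn1 = -k1 * n1
        dn2 = k1 * n1 - (k2 + gamma2) * n2
        dn3 = k2 * n2
        n1, n2, n3 = n1 + dt * dn1, n2 + dt * dn2, n3 + dt * dn3
    return n3

pos = third_level_population(delay=2.0)    # pulse 2 shortly after pulse 1
neg = third_level_population(delay=-8.0)   # pulse 2 well before pulse 1
```

With the second pulse arriving after the first, population reaches the third level; with the order reversed, level 2 is still empty when pulse 2 passes, so almost nothing is transferred, the basic asymmetry in the delay dependence.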

  12. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
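A core point above is that the time-transformed equations are no longer Hamiltonian, so ordinary (non-symplectic) integrators lose the bounded-energy behavior that motivates symplectic methods in the first place. That contrast is easy to demonstrate on the harmonic oscillator q'' = -q: fixed-step leapfrog keeps the energy error bounded, while an explicit midpoint (RK2) scheme of the same nominal order drifts steadily.

```python
def energy_error_leapfrog(dt=0.1, nsteps=20000):
    # Symplectic Stormer-Verlet/leapfrog: energy error stays bounded.
    q, p = 1.0, 0.0
    for _ in range(nsteps):
        p -= 0.5 * dt * q
        q += dt * p
        p -= 0.5 * dt * q
    return abs(0.5 * (q * q + p * p) - 0.5)   # error vs H(0) = 1/2

def energy_error_rk2(dt=0.1, nsteps=20000):
    # Explicit midpoint (non-symplectic): energy grows by (1 + dt^4/4) per step.
    q, p = 1.0, 0.0
    for _ in range(nsteps):
        qm, pm = q + 0.5 * dt * p, p - 0.5 * dt * q
        q, p = q + dt * pm, p - dt * qm
    return abs(0.5 * (q * q + p * p) - 0.5)

err_lf = energy_error_leapfrog()
err_rk2 = energy_error_rk2()
```

The same contrast is what is at stake when a time transformation dt = Δ(q, p) dτ destroys the Hamiltonian structure, and why the paper's extended-phase-space and generating-function constructions are needed to retain it.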

  13. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

    In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, although the accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
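The Patankar idea underlying such schemes is to weight each destruction term by the ratio of the new to the old concentration, which makes the update implicit in a way that guarantees positivity and mass conservation for any step size. A minimal first-order (modified Patankar-Euler) sketch on the linear decay chain c1 -> c2, following the Burchard et al. (2003) construction rather than the paper's higher-order scheme:

```python
def mpe_step(c, k, dt):
    # Modified Patankar-Euler step for c1' = -k*c1, c2' = +k*c1.
    # The destruction of c1 is weighted by c1_new/c1_old, giving an implicit
    # solve that stays positive and conserves c1 + c2 for ANY dt.
    c1, c2 = c
    c1_new = c1 / (1.0 + dt * k)      # solves c1_new = c1 - dt*k*c1_new
    c2_new = c2 + dt * k * c1_new     # production gets the same weighting
    return (c1_new, c2_new)

c = (1.0, 0.0)
for _ in range(10):
    c = mpe_step(c, k=5.0, dt=10.0)   # deliberately huge step (k*dt = 50)
```

An explicit Euler step with k*dt = 50 would drive c1 strongly negative in one step; the Patankar-weighted update merely under-resolves the transient while keeping both components positive and their sum exactly conserved.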

  14. Intake flow and time step analysis in the modeling of a direct injection Diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br

    2010-07-01

    This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under motored conditions. The three-dimensional model of a reciprocating engine geometry comprises a bowl-in-piston combustion chamber, an intake port of shallow-ramp helical type, and an exhaust port of conventional type. The equations are solved numerically, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial Finite Volume CFD code; parallel computation is employed. For the purpose of examining the in-cylinder turbulence characteristics, two parameters are observed: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow characteristics inside the cylinder during the intake stroke, so their study and understanding are very important. Additionally, the evolutions of the discharge coefficient and swirl ratio along the crank angle are correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model, in its low-Reynolds approach, with standard near-wall treatment. The system of partial differential equations to be solved consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, using a segregated solution algorithm; the enthalpy equation is also solved. A moving hexahedral trimmed-mesh independence study is presented, along with many convergence tests, and a reliable criterion is established. The results of the pressure fields are shown on a vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. It is also possible to note divergences between the time steps, mainly for the smaller time step. (author)

  15. Time step MOTA thermostat simulation

    International Nuclear Information System (INIS)

    Guthrie, G.L.

    1978-09-01

    The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation analysis was the approach chosen to determine the maximum gain value that could be used to minimize steady-state temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomena, and similar events. The programs BITT and CYLB are faster, but do not directly show ringing time.

  16. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    Science.gov (United States)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with a single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R & D Center. Comparison with the experimental data shows that the RANS sliding-mesh approach with the SST k-ω turbulence model predicts the open water performance of contra-rotating propellers with good precision, and that a small time step size can improve the accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  17. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
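The bookkeeping behind the speedup can be sketched by bucketing elements into power-of-two step levels, as many LTS schemes do: small elements subcycle, large ones do not, instead of every element being forced to the globally smallest CFL-stable step. The element sizes below are hypothetical, and a real LTS scheme additionally needs careful flux coupling at level interfaces, which this sketch omits.

```python
import math

def lts_levels(element_sizes, wavespeed=1.0, cfl=0.5):
    # Per-element stable step from the CFL condition, then each element is
    # assigned a power-of-two fraction of the coarsest stable step.
    dt_elem = [cfl * h / wavespeed for h in element_sizes]
    dt_max = max(dt_elem)
    levels = [2 ** math.ceil(math.log2(dt_max / dt)) for dt in dt_elem]
    return dt_max, levels  # levels[i] = substeps element i takes per coarse step

# Mesh mostly coarse, locally refined (e.g. near steep topography).
sizes = [1.0] * 90 + [0.1] * 10
dt_max, levels = lts_levels(sizes)

# Element-updates per coarse step: LTS vs forcing everyone to the finest step.
work_lts = sum(levels)
work_global = len(sizes) * max(levels)
```

Here 10% of the elements are 10x smaller, so global stepping does 1600 element-updates per coarse step while LTS does 250, a 6.4x reduction that grows with the refinement ratio.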

  18. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

    The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criterion related to the relative change in the dependent variables; the authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and converge nonlinearities within a time step of the governing equations. Commonly, nonlinearities in the governing equations are evaluated using existing (old-time) data; the authors refer to this as the semi-implicit (SI) method. When a method converges nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).
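For context, the "standard practice" heuristic the note contrasts itself with looks like the controller below: scale the step so the maximum relative change per step stays near a target. The target and clamp factors are illustrative conventions; this is not the authors' proposed physics-based criterion.

```python
def next_dt(dt, old, new, target=0.1, dt_min=1e-6, dt_max=1.0):
    # Heuristic relative-change time step controller: shrink dt when the
    # solution changed more than `target` per step, grow it when less,
    # clamped to a factor of 2 per step and to [dt_min, dt_max].
    rel = max(abs(n - o) / max(abs(o), 1e-30) for o, n in zip(old, new))
    factor = min(2.0, max(0.5, target / max(rel, 1e-30)))
    return min(dt_max, max(dt_min, dt * factor))

# 20% max relative change against a 10% target -> halve the step.
dt = next_dt(0.1, [1.0, 2.0], [1.05, 2.4])
```

The note's criticism is that such a rule watches the solution rather than the physics; a physics-based control instead ties dt to timescales derived from the governing equations themselves.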

  19. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  20. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects the efficiency of MC burnup calculations. • The efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in the computing cost per time step needed for achieving a certain accuracy.
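The stochastic implicit Euler coupling can be caricatured on a single-nuclide toy problem: the end-of-step state is solved implicitly against a "transport" result that carries Monte Carlo noise, and the inner iterates are averaged so that noise does not destabilize the solve. All rates and the noise level below are illustrative, and the real scheme evaluates the flux from the averaged state rather than the simplified form used here.

```python
import random

random.seed(2)

def sie_step(N, dt, sigma=1.0, phi0=0.5, inner=50):
    # One SIE-style burnup step for dN/dt = -sigma*phi*N:
    # each inner iteration redoes the noisy "transport solve" for phi,
    # then the implicit Euler update; the running average is the relaxation
    # that tames the Monte Carlo noise.
    N_avg = N
    for i in range(1, inner + 1):
        phi = phi0 * (1.0 + random.gauss(0.0, 0.05))  # noisy flux estimate
        N_end = N / (1.0 + dt * sigma * phi)          # implicit Euler solve
        N_avg += (N_end - N_avg) / i                  # running average of iterates
    return N_avg

N = 1.0
for _ in range(10):
    N = sie_step(N, dt=1.0)
```

With sigma*phi0*dt = 0.5 per step, ten steps decay N by roughly (2/3)^10 ≈ 0.017; the averaging over 50 inner iterations shrinks the 5% per-solve noise to well under a percent.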

  21. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear based activity monitoring systems are becoming popular in academic research as well as consumer industry segments. In our previous work, we had presented developmental aspects of an insole based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
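Multinomial logistic discrimination is softmax regression trained on labeled feature vectors. A self-contained sketch on synthetic two-feature data for four hypothetical activity classes (the features, cluster placement, and training settings are illustrative, not the SmartStep feature set):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-feature data: one Gaussian cluster per activity
# (stand-ins for sitting, standing, walking, cycling).
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)

# Multinomial logistic regression trained by gradient descent on the
# softmax cross-entropy loss.
W = np.zeros((2, 4))
b = np.zeros(4)
Y = np.eye(4)[y]
for _ in range(500):
    logits = X @ W + b
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # softmax class probabilities
    G = (P - Y) / len(X)                       # cross-entropy gradient
    W -= 0.5 * X.T @ G
    b -= 0.5 * G.sum(axis=0)

acc = float(((X @ W + b).argmax(axis=1) == y).mean())
```

The trained weights are just two small arrays, which is what makes this family of models cheap enough to evaluate in real time on a phone.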

  22. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's steps required to r… accuracy as a fixed time-step scheme, however at a much lower computational cost…

  23. Studies on steps affecting tritium residence time in solid blanket

    International Nuclear Information System (INIS)

    Tanaka, Satoru

    1987-01-01

    For the self-sustaining of the CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required so that the residence time of tritium in the blanket is short. A short residence time means that the tritium inventory in the blanket is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release; some of these tritium migration processes were evaluated experimentally. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories and the tritium in the gas phase are conceivable. A code named TTT was written to calculate these tritium inventories and the residence time of tritium, and an example of the results is shown. The blanket is REPUTER-1, the conceptual design of a commercial reversed-field-pinch fusion reactor studied at the University of Tokyo. Experimental studies on the migration steps of tritium are also reported. (Kako, I.)
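The link between residence time and inventory that the abstract relies on is simple steady-state bookkeeping: at a steady generation rate G, a series step with mean residence time tau_i holds an inventory I_i = G * tau_i, and the totals add. The tau values below are purely illustrative, not from the paper:

```python
# Steady-state inventory bookkeeping for tritium release steps in series.
# At steady generation rate G, each step holds inventory I_i = G * tau_i.
G = 1.0e-3                        # tritium generation rate, g/s (hypothetical)
tau = {                           # mean residence times, s (illustrative)
    "grain diffusion": 3.0e3,
    "grain-edge desorption": 5.0e2,
    "pore percolation": 1.0e2,
    "purge convection": 1.0e1,
}
inventory = {step: G * t for step, t in tau.items()}
total_tau = sum(tau.values())
total_inventory = G * total_tau   # equals the sum of the per-step inventories
```

This is why shortening the slowest step (here the hypothetical grain diffusion) dominates any reduction of the blanket inventory.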

  24. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

    2006 to 2008 were used for calibrating fourteen estimated models of solar radiation in seasonally and annual time steps and the measured data of years 2009 and 2010 were used for evaluating the obtained results. The equations were used in this study divided into three groups contains: 1 The equations based on only sunshine hours. 2 The equations based on only air temperature. 3 The equations based on sunshine hours and air temperature together. On the other hand, statistical comparison must be done to select the best equation for estimating solar radiation in seasonally and annual time steps. For this purpose, in validation stage the combination of statistical equations and linear correlation was used, and then the value of mean square deviation (MSD was calculated to evaluate the different models for estimating solar radiation in mentioned time steps. Results and Discussion: The mean values of mean square deviation (MSD of fourteen models for estimating solar radiation were equal to 24.16, 20.42, 4.08 and 16.19 for spring to winter respectively, and 15.40 in annual time step. Therefore, the results showed that using the equations for autumn enjoyed high accuracy, however for other seasons had low accuracy. So, using the equations for annual time step were appropriate more than the equations for seasonally time steps. Also, the mean values of mean square deviation (MSD of the equations based on only sunshine hours, the equations based on only air temperature, and the equations based on the combination of sunshine hours and air temperature for estimating solar radiation were equal to 14.82, 17.40 and 14.88, respectively. Therefore, the results indicated that the models based on only air temperature were the worst conditions for estimating solar radiation in Shiraz region, and therefore, using the sunshine hours for estimating solar radiation is necessary. 
Conclusions: In this study for estimating solar radiation in seasonally and annual time steps in Shiraz region
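The model comparison in this record reduces to computing the mean square deviation (MSD) between estimated and measured radiation for each candidate model and selecting the smallest. A minimal sketch; the numbers below are illustrative, not the study's data:

```python
import numpy as np

def mean_square_deviation(estimated, measured):
    """Mean square deviation (MSD) between model estimates and measurements."""
    estimated = np.asarray(estimated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean((estimated - measured) ** 2))

# Hypothetical validation-stage comparison of three candidate model groups:
measured = np.array([18.2, 21.5, 24.1, 19.8])          # MJ m^-2 day^-1 (made up)
models = {
    "sunshine-only":    np.array([17.9, 22.0, 23.5, 20.4]),
    "temperature-only": np.array([15.1, 24.8, 27.0, 16.9]),
    "combined":         np.array([18.0, 21.9, 23.8, 20.1]),
}

msd = {name: mean_square_deviation(est, measured) for name, est in models.items()}
best = min(msd, key=msd.get)   # model with the smallest MSD wins
```

With these made-up values the combined sunshine-and-temperature model wins and the temperature-only model does worst, mirroring the record's ranking.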

  5. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H., .OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe their reaction kinetics. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, a very fast technique for calculating radiochemical yields that does not, however, calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
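The SBS idea can be sketched for a two-particle system: each step samples a free-diffusion Gaussian displacement (the Green's function of the diffusion equation) and then tests for reaction. This is a toy sketch with illustrative parameters, not the RITRACKS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sbs_step(positions, diff_coeffs, dt):
    """One step-by-step (SBS) update: free Brownian displacement for each
    particle, sampled from the Green's function of the diffusion equation
    (a Gaussian with variance 2*D*dt per coordinate)."""
    sigma = np.sqrt(2.0 * diff_coeffs * dt)            # per-particle std dev
    return positions + rng.normal(0.0, 1.0, positions.shape) * sigma[:, None]

def react_if_close(positions, i, j, reaction_radius):
    """Fully diffusion-controlled reaction test: particles i and j react
    when their separation falls below the reaction radius."""
    return np.linalg.norm(positions[i] - positions[j]) < reaction_radius

# Illustrative numbers only (not a RITRACKS parameter set):
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # nm
diff_coeffs = np.array([2.8e-3, 2.8e-3])                   # nm^2/ps (made up)
for _ in range(1000):
    positions = sbs_step(positions, diff_coeffs, dt=1.0)
    if react_if_close(positions, 0, 1, reaction_radius=0.5):
        break   # the pair has reacted; record the reaction time/products here
```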

  6. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    International Nuclear Information System (INIS)

    Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.

    2017-01-01

Modern power systems are facing an increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESSs) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESSs, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined so as to minimize the number of unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operating conditions. Furthermore, the proposed method is validated through several case studies performed on modified IEEE 13-node and IEEE 123-node test feeders.

  7. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.
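The synchronization-level idea can be sketched with a toy decay equation standing in for the per-cell flow update: the zone with the smaller time step sub-cycles, and both zones meet at each synchronization level. A generic illustration, not the authors' algorithm:

```python
import numpy as np

def advance(u, dt, rate):
    """Explicit Euler update for a simple decay model du/dt = -rate*u,
    standing in for the per-cell flow update."""
    return u - dt * rate * u

def multi_time_step(u_coarse, u_fine, dt, rate, ratio=2):
    """Advance both zones to the next synchronization level: the coarse zone
    takes one step of size dt, while the fine zone takes `ratio` sub-steps
    of size dt/ratio."""
    u_coarse = advance(u_coarse, dt, rate)
    for _ in range(ratio):
        u_fine = advance(u_fine, dt / ratio, rate)
    return u_coarse, u_fine          # synchronized at the same physical time

uc, uf = np.ones(8), np.ones(8)
for _ in range(10):                  # 10 synchronization levels
    uc, uf = multi_time_step(uc, uf, dt=0.1, rate=1.0)
```

Both zones reach the same physical time at every synchronization level; the fine zone simply gets there in more, smaller steps.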

  8. Monte Carlo steps per spin vs. time in the master equation II: Glauber kinetics for the infinite-range ising model in a static magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)

    2006-01-15

As an extension of our previous work on the relationship between time in Monte Carlo simulation and time in the continuous master equation for the infinite-range Glauber kinetic Ising model in the absence of any magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin, as time in the MC simulations, again turns out to be proportional to time in the master equation for the model in relatively large static magnetic fields at any temperature. At and near the critical point in a relatively small magnetic field, the model exhibits a significant finite-size dependence, and the solution of the Suzuki-Kubo differential equation stemming from the master equation needs to be re-scaled to fit the Monte Carlo steps per spin for systems with different numbers of spins.
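The comparison in this record can be sketched by running heat-bath (Glauber) dynamics for the infinite-range (Curie-Weiss) model in a field alongside an Euler integration of the mean-field Suzuki-Kubo rate equation dm/dt = -m + tanh(beta*(J*m + h)), with MCS/spin as the time unit. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_sweep(spins, beta, J, h):
    """One Monte Carlo step per spin (MCS) of heat-bath Glauber dynamics for
    the infinite-range Ising model in a static field h; the local field on
    every spin is the mean-field value J*m + h."""
    n = spins.size
    for _ in range(n):
        i = rng.integers(n)
        m = spins.mean()                       # mean-field magnetization
        p_up = 0.5 * (1.0 + np.tanh(beta * (J * m + h)))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

def suzuki_kubo(m, t_max, dt, beta, J, h):
    """Euler integration of the mean-field (Suzuki-Kubo) rate equation
    dm/dt = -m + tanh(beta*(J*m + h))."""
    for _ in range(int(t_max / dt)):
        m += dt * (-m + np.tanh(beta * (J * m + h)))
    return m

spins = -np.ones(2000)
for _ in range(30):                            # 30 MCS/spin
    mc_sweep(spins, beta=0.5, J=1.0, h=1.0)
m_mc = spins.mean()
m_ode = suzuki_kubo(-1.0, t_max=30.0, dt=0.01, beta=0.5, J=1.0, h=1.0)
```

In this strong-field regime the MC trajectory (with MCS/spin as time) tracks the master-equation solution closely, consistent with the proportionality reported above.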

  9. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
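The core LTS bookkeeping, assigning each element a power-of-two fraction of the largest admissible step so that locally refined elements do not drag the global step down, can be sketched as follows (a generic illustration, not the LTS-Newmark scheme itself):

```python
import numpy as np

def lts_levels(h, c, cfl=0.5, max_ratio=128):
    """Assign each element a local time step dt_e = cfl*h_e/c, snapped down
    to a power-of-two fraction dt_max/2**k of the largest admissible step.
    Returns the level k per element and the global (coarsest) step dt_max."""
    dt_elem = cfl * np.asarray(h, dtype=float) / c    # per-element CFL limit
    dt_max = dt_elem.max()
    levels = np.ceil(np.log2(dt_max / dt_elem)).astype(int)
    return np.clip(levels, 0, int(np.log2(max_ratio))), dt_max

# Element sizes with a strong local-refinement contrast (illustrative):
h = np.array([1.0, 1.0, 0.9, 0.1, 0.01])
levels, dt_max = lts_levels(h, c=1.0)
# Elements at level k sub-cycle with step dt_max / 2**k; only the refined
# elements take many small steps, instead of the whole mesh.
```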

  10. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  11. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  12. Modeling the stepping mechanism in negative lightning leaders

    Science.gov (United States)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

It is well known that negative leaders develop in a stepped manner through a mechanism involving so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this asymmetry has been known for about a hundred years, no plausible model explaining it had previously been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry between negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need roughly half the electric field of negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.

  13. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and these SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  14. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for

  15. HIA, the next step: Defining models and roles

    International Nuclear Information System (INIS)

    Putters, Kim

    2005-01-01

If HIA is to be an effective instrument for optimising health interests in the policy-making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, fitted to simple problems and a rational perspective of policy making; it involves following structured steps. The second is the rounds (Echternach) model, fitted to complex problems and a network perspective of policy making; it is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems; here HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policy making.

  16. An application of the time-step topological model for three-phase transformer no-load current calculation considering hysteresis

    International Nuclear Information System (INIS)

    Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran

    2017-01-01

In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the hysteresis model used. In this paper, a hysteresis model based on the first-order reversal curve has been used. - Highlights: • A lumped-element method for modelling transformers is demonstrated. • The method can include hysteresis and arbitrarily complex geometries. • Simulation results for one power transformer are compared to measurements. • An analytical curve-fitting expression for static hysteresis loops is shown.

  17. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...

  18. Forecasting with Nonlinear Time Series Model: A Monte-Carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

erated recursively up to any step greater than one. For nonlinear time series models, a point forecast for step one can be made easily, as in the linear case, but a forecast for a step greater than or equal to ..... London. Franses, P. H. (1998). Time series models for business and economic forecasting, Cambridge University Press.

  19. Genetic demixing and evolution in linear stepping stone models

    Science.gov (United States)

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.

    2010-04-01

    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. Also reviewed are how the observed patterns of genetic diversity can be used for statistical inference and the differences are highlighted between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q -allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial

  20. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation, including an externally imposed quasilinear diffusion term. This method reduces CPU requirements by factors of two or three compared with standard ADI. More importantly, the robustness of the procedure greatly reduces the workload of the user. The procedure selects a nearly optimal Δt with a minimum of user intervention, relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
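The discard-and-retry strategy described here can be sketched generically with step-doubling error control: take a full step and two half steps, reject the step when they disagree too much, and otherwise grow Δt aggressively. This is a schematic illustration using forward Euler, not the paper's ADI algorithm:

```python
import numpy as np

def adaptive_advance(u, t_end, dt0, rhs, tol=1e-3, grow=1.5, shrink=0.5):
    """Aggressive time-step selection by step doubling: compare one full step
    against two half steps; discard the step (retry with a smaller dt) when
    the estimated error is too large, otherwise accept and enlarge dt."""
    t, dt = 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = u + dt * rhs(u)                     # one step of size dt
        half = u + 0.5 * dt * rhs(u)               # two steps of size dt/2
        two_half = half + 0.5 * dt * rhs(half)
        err = np.max(np.abs(full - two_half))      # local error estimate
        if err > tol:
            dt *= shrink                           # step rejected: discard
            continue
        u, t = two_half, t + dt                    # step accepted
        dt *= grow                                 # be aggressive next time
    return u

# Decay test problem du/dt = -u, exact solution exp(-t):
u_final = adaptive_advance(np.array([1.0]), t_end=1.0, dt0=0.5,
                           rhs=lambda u: -u)
```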

  1. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model covers the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
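The two-step idea can be sketched with a small coordinate-descent LASSO. For brevity this illustration uses least squares rather than the quantile loss of the paper, with data-dependent weights 1/|β̂_j| for the second (adaptive) step; all names and values are illustrative:

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, weights=None, n_iter=500):
    """Coordinate-descent LASSO for (1/2n)||y - Xb||^2 + lam * sum(w_j|b_j|).
    weights=None gives the plain first-step LASSO; data-dependent weights
    give the second-step adaptive LASSO."""
    n, p = X.shape
    w = np.ones(p) if weights is None else weights
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, n * lam * w[j]) / col_ss[j]
    return beta

rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                        # sparse truth
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta1 = lasso_cd(X, y, lam=0.1)                         # step 1: screen
w = 1.0 / (np.abs(beta1) + 1e-6)                        # adaptive weights
beta2 = lasso_cd(X, y, lam=0.1, weights=w)              # step 2: prune
selected = np.flatnonzero(np.abs(beta2) > 1e-8)
```

Step 1 shrinks the problem to a handful of candidates; step 2's weights make the penalty on the weakly supported candidates enormous, so only the true support survives.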

  2. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

By comparison with the solution of the time-dependent Schrödinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates

  3. Diffraction model of a step-out transition

    Energy Technology Data Exchange (ETDEWEB)

    Chao, A.W.; Zimmermann, F.

    1996-06-01

The diffraction model of a cavity, suggested by Lawson, Bane, and Sands, is generalized to a step-out transition. Using this model, the high-frequency impedance is calculated explicitly for the case in which the transition step is small compared with the beam pipe radius. In the diffraction model for a small step-out transition, the total energy is conserved, but, unlike the cavity case, the diffracted waves in the geometric shadow and in the pipe region do not in general carry equal energy. In the limit of small step sizes, the impedance derived from the diffraction model agrees with that found by Balakin, Novokhatsky and also Kheifets. This impedance can be used to compute the wake field of a round collimator whose half aperture is much larger than the bunch length, such as exists in the SLC final focus.

  4. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
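The payoff of hierarchical (block) time steps can be sketched with a simple cost model: snap each particle's required step to dt_max/2^k and compare the number of updates per unit time against a shared step forced down to the global minimum. The distribution of step-size requirements below is illustrative:

```python
import numpy as np

def block_time_steps(dt_required, dt_max):
    """Snap each particle's required time step to the power-of-two hierarchy
    dt_max / 2**k, as in block/hierarchical time stepping for N-body codes.
    The snapped step is always <= the requirement."""
    k = np.maximum(0, np.ceil(np.log2(dt_max / dt_required))).astype(int)
    return dt_max / 2.0 ** k

# Hypothetical spread of per-particle step-size requirements (three decades):
rng = np.random.default_rng(4)
dt_req = 10.0 ** rng.uniform(-3, 0, 10000)
dt = block_time_steps(dt_req, dt_max=1.0)

# Cost model: updates per unit simulated time is sum(1/dt_i); a shared time
# step forces every particle onto the smallest step in the system.
cost_hier = np.sum(1.0 / dt)
cost_shared = dt_req.size / dt.min()
speedup = cost_shared / cost_hier
```

Only the few particles with tight requirements sub-cycle deeply, which is the source of the factor-of-several speedup reported above.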

  5. Implementation of Real-Time Machining Process Control Based on Fuzzy Logic in a New STEP-NC Compatible System

    Directory of Open Access Journals (Sweden)

    Po Hu

    2016-01-01

Full Text Available Implementing real-time machining process control at the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control at the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithread, and shared-memory technologies in conjunction. The cutting force in a specific direction of the machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The experimental results prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller at the shop floor, keeping the actual cutting force around the ideal value whether the axial cutting depth changes suddenly or continuously.

  6. The construction of geological model using an iterative approach (Step 1 and Step 2)

    International Nuclear Information System (INIS)

    Matsuoka, Toshiyuki; Kumazaki, Naoki; Saegusa, Hiromitsu; Sasaki, Keiichi; Endo, Yoshinobu; Amano, Kenji

    2005-03-01

One of the main goals of the Mizunami Underground Research Laboratory (MIU) Project is to establish appropriate methodologies for reliably investigating and assessing the deep subsurface. This report documents the results of geological modeling in Step 1 and Step 2 using the iterative investigation approach at the site scale (several 100 m to several km in area). For the Step 1 model, existing information (e.g. literature) and results from geological mapping and a reflection seismic survey were used. For the Step 2 model, additional information obtained from the geological investigation using an existing borehole and from the shallow borehole investigation was incorporated. As a result of this study, the geological elements that should be represented in the model were defined, and several major faults with NNW, EW and NE trends were identified (or inferred) in the vicinity of the MIU site. (author)

  7. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings

  8. BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics

    Science.gov (United States)

    Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.

    2010-12-01

BIOMAP simulates competition between two Plant Functional Types (PFT) at any given point in the conterminous U.S. using a time series of daily temperature (mean, minimum, maximum), precipitation, humidity, light and nutrients, with PFT-specific rooting within a multi-layer soil. The model employs a 2-layer canopy biophysics, Farquhar photosynthesis, the Beer-Lambert Law for light attenuation and a mechanistic soil hydrology. In essence, BIOMAP is a re-built version of the biogeochemistry model, BIOME-BGC, in the form of the MAPSS biogeography model. Specific enhancements are: 1) the 2-layer canopy biophysics of Dolman (1993); 2) the unique MAPSS-based hydrology, which incorporates canopy evaporation, snow dynamics, infiltration and saturated and unsaturated percolation with ‘fast’ flow and base flow and a ‘tunable aquifer’ capacity, a metaphor of Darcy’s Law; and, 3) a unique MAPSS-based stomatal conductance algorithm, which simultaneously incorporates vapor pressure and soil water potential constraints, based on physiological information, and many other improvements. Over small domains the PFTs can be parameterized as individual species to investigate fundamental vs. potential niche theory, while at coarser scales the PFTs can be rendered as more general functional groups. Since all of the model processes operate at the leaf to plot scale (physiology to PFT competition), the model essentially has no intrinsic scale and can be implemented on a grid of any size, taking on the characteristics defined by the homogeneous climate of each grid cell. Currently, the model is implemented on the VEMAP 1/2 degree, daily grid over the conterminous U.S. Although both the thermal and water-limited ecotones are dynamic, following climate variability, the PFT distributions remain fixed. Thus, the model is currently being fitted with a ‘reproduction niche’ to allow full dynamic operation as a Dynamic General Vegetation Model (DGVM). While global simulations
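The Beer-Lambert light attenuation named above can be sketched directly; the LAI values and extinction coefficient below are illustrative, not BIOMAP's parameterization:

```python
import numpy as np

def beer_lambert(par_top, lai_layers, k=0.5):
    """Beer-Lambert attenuation of light through canopy layers:
    I(L) = I0 * exp(-k * L), with L the cumulative leaf area index (LAI)
    from the top of the canopy. Returns irradiance at the top of each
    layer and below the whole canopy."""
    lai_cum = np.concatenate(([0.0], np.cumsum(lai_layers)))
    return par_top * np.exp(-k * lai_cum)

# Two-layer canopy (overstory + understory), illustrative LAI values:
par = beer_lambert(par_top=1000.0, lai_layers=[2.0, 1.5], k=0.5)
absorbed = -np.diff(par)   # light intercepted by each layer
```

The per-layer interception drives the 2-layer canopy biophysics: the overstory takes the largest share, and the understory competes for what filters through.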

  9. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  10. Coherent states for the time dependent harmonic oscillator: the step function

    International Nuclear Information System (INIS)

    Moya-Cessa, Hector; Fernandez Guasti, Manuel

    2003-01-01

    We study the time evolution of the quantum harmonic oscillator subjected to a sudden change of frequency. The treatment is based on an approximate analytic solution to the time dependent Ermakov equation for a step function. This approach allows for a continuous treatment that differs from former studies, which involve the matching of two time independent solutions at the time when the step occurs.
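
    The Ermakov equation for this problem, u''(t) + ω(t)² u(t) = 1/u(t)³, can also be integrated numerically to see the effect of the frequency step. A sketch with a plain RK4 integrator and assumed parameter values (not the authors' analytic approach):

```python
def ermakov_rhs(t, u, v, omega):
    # u'' = 1/u**3 - omega(t)**2 * u  (Ermakov-Pinney equation)
    return v, 1.0 / u**3 - omega(t) ** 2 * u

def rk4_step(t, u, v, h, omega):
    k1u, k1v = ermakov_rhs(t, u, v, omega)
    k2u, k2v = ermakov_rhs(t + h/2, u + h/2*k1u, v + h/2*k1v, omega)
    k3u, k3v = ermakov_rhs(t + h/2, u + h/2*k2u, v + h/2*k2v, omega)
    k4u, k4v = ermakov_rhs(t + h, u + h*k3u, v + h*k3v, omega)
    return (u + h/6*(k1u + 2*k2u + 2*k3u + k4u),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

# Frequency suddenly doubles at t = 5 (the step function)
omega = lambda t: 1.0 if t < 5.0 else 2.0

t, u, v, h = 0.0, 1.0, 0.0, 1e-3   # u=1, u'=0 is exact equilibrium for omega=1
while t < 10.0:
    u, v = rk4_step(t, u, v, h, omega)
    t += h
print(u)
```

    Before the step, u = 1 is an exact stationary solution; after the frequency doubles, u oscillates between 0.5 and 1, as energy conservation for the post-step equation dictates.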

  11. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only a few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and the subsequent estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and a temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for
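
    The k-Nearest-Neighbor disaggregation step can be sketched as follows (k = 1 for brevity, and the "observed" hourly profiles are invented; the study's actual KNN presumably matches on several daily predictors, not just the daily total):

```python
def knn_disaggregate(daily_total, observed_hourly_days):
    """Disaggregate a synthetic daily total to hourly values by finding the
    observed day with the nearest daily sum (k=1 here) and rescaling its
    hourly profile so that mass is conserved."""
    best = min(observed_hourly_days, key=lambda day: abs(sum(day) - daily_total))
    s = sum(best)
    if s == 0:
        return [daily_total / len(best)] * len(best)  # flat fallback for a dry analogue
    return [h * daily_total / s for h in best]

observed = [
    [0.0] * 20 + [2.0, 5.0, 3.0, 0.0],   # a 10 mm convective day
    [1.0] * 24,                          # a 24 mm stratiform day
]
hourly = knn_disaggregate(12.0, observed)
print(round(sum(hourly), 6))  # 12.0 (mass conserved)
```

    The nearest analogue (the 10 mm day) supplies the sub-daily structure, while rescaling guarantees the synthetic daily total is preserved.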

  12. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (Step 0 and Step 1)

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations, analyses, and evaluations have been conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 0 and Step 1, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) as the investigation progressed from Step 0 to Step 1, the understanding of groundwater flow was enhanced and the hydrogeological model could be revised; 2) the importance of faults as major groundwater flow pathways was demonstrated; 3) the geological and hydrogeological characteristics of faults with orientations of NNW and NE were shown to be especially significant. The main item specified for further investigation is the geological and hydrogeological characterization of the NNW- and NE-trending faults. (author)

  13. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems such as the propellant feedline system are complex processes involving fast phases followed by slow phases. Therefore, their time-accurate computation requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
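
    The abstract does not give the control law, but a generic error-feedback step-size controller of the kind described can be sketched with a step-doubling error estimate (the tolerance, growth/shrink factors, and clamps below are all illustrative assumptions, not the paper's algorithm):

```python
def adaptive_euler(f, y, t_end, tol=1e-6, dt=1e-3):
    """Integrate y' = f(y) with forward Euler, adapting dt by comparing
    one full step against two half steps (a simple local error estimate)."""
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = y + dt * f(y)
        half = y + 0.5 * dt * f(y)
        two_half = half + 0.5 * dt * f(half)
        err = abs(two_half - full)
        if err <= tol or dt < 1e-12:
            y, t = two_half, t + dt                            # accept the step
            grow = 2.0 if err == 0 else min(2.0, 0.9 * (tol / err) ** 0.5)
            dt *= grow                                         # grow cautiously
        else:
            dt *= max(0.1, 0.9 * (tol / err) ** 0.5)           # reject and shrink
    return y

# Exponential decay y' = -y, y(0) = 1: exact answer is exp(-1) at t = 1
print(adaptive_euler(lambda y: -y, 1.0, 1.0))
```

    The controller shrinks the step during fast phases (large error estimate) and grows it again during slow phases, which is exactly the fast-slow-fast behavior the paper targets.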

  14. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  15. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
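
    For contrast with the resonance-free schemes described above, a minimal standard multiple-time-step (r-RESPA-style) integrator can be sketched as follows (toy scalar forces; in this standard form the outer step is resonance-limited, which is precisely the restriction the isokinetic methods of the paper remove):

```python
def respa_step(x, v, dt, n_inner, f_fast, f_slow, m=1.0):
    """One outer step of a simple r-RESPA-style integrator: the slow force
    kicks at the outer time step dt, while the fast force is integrated
    with velocity Verlet at the inner step dt / n_inner."""
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner velocity-Verlet loop (fast force)
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    return x, v

# Stiff spring (fast) plus a weak constant pull (a stand-in for an
# expensive, slowly varying long-range force)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: 0.5

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10, f_fast=f_fast, f_slow=f_slow)
print(x, v)
```

    Here the slow force is evaluated ten times less often than the fast one; the trajectory stays bounded because the inner step resolves the stiff oscillation while the outer step remains well below the resonance limit.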

  16. The association between choice stepping reaction time and falls in older adults--a path analysis model

    NARCIS (Netherlands)

    Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.

    2010-01-01

    Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of

  17. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    Science.gov (United States)

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6 months post-treatment using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvements in attachment avoidance and interpersonal problems. The findings indicated that a second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence that the second step can potentially reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  18. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the simulations of the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.

  19. Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times

    OpenAIRE

    Sovová, H. (Helena)

    2012-01-01

    Kinetics of supercritical fluid extraction (SFE) from plants is variable due to different micro-structure of plants and their parts, different properties of extracted substances and solvents, and different flow patterns in the extractor. Variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of characteristic times of four single extraction steps: internal diffusion, exter...

  20. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions

  1. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the scenarios with 5 and 50 QTL. The SS-BayesB model obtained the lowest accuracy in the 500 QTL scenario. The SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  2. Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach

    Directory of Open Access Journals (Sweden)

    Baah Aye Kusi

    2015-09-01

    We investigate bank profitability in Ghana using periods before, during and after the global financial crisis with the five-step Du Pont model for the first time. We adapt the variables of the five-step Du Pont model to explain bank profitability with panel data of twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, fixed and random effects models with robust errors are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (asset to equity) and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost returns to equity holders through the above mentioned variables. We further report that the five-step Du Pont model better explains the total variation (94%) in bank profitability in Ghana as compared to earlier findings, suggesting that bank-specific variables are key in explaining ROE in banks in Ghana. We cited no empirical study that has employed the five-step Du Pont model, making our study unique and different from earlier studies, as we assert that bank-specific variables are core to explaining bank profitability.
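
    One common form of the five-step Du Pont identity multiplies tax burden, interest burden, operating margin, asset turnover and the equity multiplier; a sketch with invented figures (not data from the study):

```python
def five_step_dupont(net_income, pretax_income, ebit, sales, assets, equity):
    """Five-step Du Pont identity:
    ROE = tax burden * interest burden * operating margin
          * asset turnover * equity multiplier."""
    tax_burden = net_income / pretax_income
    interest_burden = pretax_income / ebit
    operating_margin = ebit / sales
    asset_turnover = sales / assets
    equity_multiplier = assets / equity
    roe = (tax_burden * interest_burden * operating_margin
           * asset_turnover * equity_multiplier)
    return roe, (tax_burden, interest_burden, operating_margin,
                 asset_turnover, equity_multiplier)

# Illustrative bank figures (currency units are arbitrary)
roe, parts = five_step_dupont(net_income=80, pretax_income=100, ebit=120,
                              sales=400, assets=2000, equity=200)
print(roe)  # algebraically identical to net_income / equity = 0.4
```

    The decomposition is useful precisely because each factor isolates one driver of ROE (taxes, financing cost, operations, efficiency, leverage) while their product reproduces ROE exactly.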

  3. Stabilization of a three-dimensional limit cycle walking model through step-to-step ankle control.

    Science.gov (United States)

    Kim, Myunghee; Collins, Steven H

    2013-06-01

    Unilateral, below-knee amputation is associated with an increased risk of falls, which may be partially related to a loss of active ankle control. If ankle control can contribute significantly to maintaining balance, even in the presence of active foot placement, this might provide an opportunity to improve balance using robotic ankle-foot prostheses. We investigated ankle- and hip-based walking stabilization methods in a three-dimensional model of human gait that included ankle plantarflexion, ankle inversion-eversion, hip flexion-extension, and hip ad/abduction. We generated discrete feedback control laws (linear quadratic regulators) that altered nominal actuation parameters once per step. We used ankle push-off, lateral ankle stiffness and damping, fore-aft foot placement, lateral foot placement, or all of these as control inputs. We modeled environmental disturbances as random, bounded, unexpected changes in floor height, and defined balance performance as the maximum allowable disturbance value for which the model walked 500 steps without falling. Nominal walking motions were unstable, but were stabilized by all of the step-to-step control laws we tested. Surprisingly, step-by-step modulation of ankle push-off alone led to better balance performance (3.2% leg length) than lateral foot placement (1.2% leg length) for these control laws. These results suggest that appropriate control of robotic ankle-foot prosthesis push-off could make balancing during walking easier for individuals with amputation.

  4. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
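
    The time-step criterion discussed above is, in its standard Aarseth form, built from the acceleration and its first three time derivatives; a sketch using scalar magnitudes (vector norms are used in practice, and the sample values and accuracy parameter η are assumptions):

```python
import math

def aarseth_timestep(a, adot, a2, a3, eta=0.02):
    """Standard Aarseth time-step criterion from the magnitudes of the
    acceleration a and its first three time derivatives (adot, a2, a3):
        dt = sqrt(eta * (|a||a2| + |adot|^2) / (|adot||a3| + |a2|^2))
    """
    num = abs(a) * abs(a2) + adot ** 2
    den = abs(adot) * abs(a3) + a2 ** 2
    return math.sqrt(eta * num / den)

# Illustrative magnitudes in code units; eta ~ 0.01-0.02 is a typical choice
print(aarseth_timestep(a=1.0, adot=0.1, a2=0.5, a3=0.2))
```

    Because the criterion mixes low- and high-order derivatives, it stays well-behaved when any single derivative passes through zero, which is one reason it became the de facto standard for Hermite-type integrators.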

  5. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  6. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid size at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1

  7. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓ^p-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order α ∈ (0, 2), α ≠ 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
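
    For concreteness, the convolution quadrature generated by the backward Euler method approximates the fractional derivative as follows (this is the standard construction, stated in generic notation rather than the paper's):

```latex
\partial_\tau^{\alpha} u^{n} \approx \tau^{-\alpha} \sum_{j=0}^{n} b_j\, u^{n-j},
\qquad \sum_{j=0}^{\infty} b_j \zeta^{j} = (1-\zeta)^{\alpha},
\quad b_j = (-1)^{j} \binom{\alpha}{j},
```

    i.e. the quadrature weights are the coefficients of the generating function $(1-\zeta)^\alpha$, the fractional-power analogue of the backward difference.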

  8. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy

  9. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor in the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  10. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and effectiveness of the method are verified by comparison of numerical simulation results.

  11. Biomechanical influences on balance recovery by stepping.

    Science.gov (United States)

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  12. 3D elastic wave modeling using modified high‐order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam

    2009-01-01

    We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
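
The abstract does not list the modified extrapolation coefficients, but the family it modifies is the classic Lax-Wendroff construction. As a point of reference, a minimal second-order Lax-Wendroff step for the 1D advection equation (a scalar stand-in for the 3D elastic system) looks like this:

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff time step for u_t + a u_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number (|c| <= 1 for stability)."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, +1)   # u_{j-1}
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# advect a Gaussian pulse exactly once around a periodic domain
n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)
u0 = u.copy()
for _ in range(int(n / c)):   # n/c steps of size c*dx cover one period
    u = lax_wendroff_step(u, c)
```

The scheme is conservative on a periodic grid (the total of `u` is preserved to round-off), and after one full period the pulse returns close to its starting position, up to the scheme's dispersive error.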

  13. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was......, on the other hand, lighter than the single-step method....

  14. Grief: Difficult Times, Simple Steps.

    Science.gov (United States)

    Waszak, Emily Lane

    This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…

  15. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on the adaptive integration of stochastic differential equations, an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included in existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons, as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
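
As a much-simplified illustration of the idea (not the Beliaev-Budker operator), an adaptive Euler-Maruyama integrator can cap each step by the local collision time 1/ν, so that regions of high collisionality are automatically resolved. The velocity-dependent rate used here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(v0, t_end, nu, sigma, max_frac=0.1):
    """Integrate the Langevin-type SDE dv = -nu(v) v dt + sigma dW with an
    adaptive step: dt is capped so nu(v)*dt <= max_frac, i.e. each step
    resolves the local collision time 1/nu(v)."""
    t, v = 0.0, v0
    while t < t_end:
        nu_loc = nu * (1.0 + v * v)          # hypothetical velocity-dependent rate
        dt = min(max_frac / nu_loc, t_end - t)
        v += -nu_loc * v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return v

samples = [relax(2.0, 3.0, 1.0, 0.3) for _ in range(500)]
```

After several relaxation times the ensemble forgets its initial velocity and fluctuates about zero with a variance set by the sigma/nu balance, which is the qualitative behaviour a collision operator must reproduce.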

  16. A parallel nearly implicit time-stepping scheme

    OpenAIRE

    Botchev, Mike A.; van der Vorst, Henk A.

    2001-01-01

    Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...

  17. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive...... to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations....

  18. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The PRKEs were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, are also studied and discussed. (authors)
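
The Two-Step Method described here can be sketched as step-doubling error control wrapped around implicit Euler: a step of size h is compared against two steps of h/2, and h is adjusted to keep their difference below a control error. The one-delayed-group point kinetics parameters below are typical textbook values, not those of the paper:

```python
import numpy as np

def implicit_euler(y, h, A):
    """One backward Euler step for the linear system y' = A y."""
    return np.linalg.solve(np.eye(len(y)) - h * A, y)

def solve_prke(rho, beta=0.0065, lam=0.08, Lam=1e-4,
               t_end=1.0, h0=1e-4, tol=1e-6):
    """Point reactor kinetics with one delayed neutron group, integrated by
    implicit Euler with step-doubling (Two-Step) error control. Returns the
    relative neutron density n(t_end), starting from equilibrium."""
    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam, -lam]])
    y = np.array([1.0, beta / (lam * Lam)])   # equilibrium precursor level
    t, h = 0.0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y1 = implicit_euler(y, h, A)                           # one step of h
        y2 = implicit_euler(implicit_euler(y, h / 2, A), h / 2, A)  # two of h/2
        err = np.max(np.abs(y1 - y2) / (np.abs(y2) + 1e-30))
        if err > tol:          # reject: halve the step and retry
            h *= 0.5
            continue
        y, t = y2, t + h       # accept the finer solution
        if err < tol / 4:      # solution smooth: grow the step
            h *= 2.0
    return y[0]
```

With zero reactivity the equilibrium is preserved, and a small positive reactivity (below prompt critical) produces the expected slow delayed-supercritical rise.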

  19. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
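
The leap idea can be illustrated in a few lines: instead of advancing the clock one unit per tick, the Tick process jumps directly to the earliest pending deadline, so intermediate clock values never enter the state space. This hypothetical helper is a sketch of that rule, not the paper's model-checker encoding:

```python
def next_leap(now, deadlines):
    """Return the next clock value for a leaping Tick process: jump straight
    to the earliest deadline still in the future; fall back to a unit tick
    when no deadline remains."""
    future = [d for d in deadlines if d > now]
    return min(future) if future else now + 1
```

For deadlines {3, 5, 9} the clock visits 3, 5, 9 rather than every integer in between, which is exactly the state-space reduction the abstract describes.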

  20. Step training in a rat model for complex aneurysmal vascular microsurgery

    Directory of Open Access Journals (Sweden)

    Martin Dan

    2015-12-01

    Full Text Available Introduction: Microsurgery training is a key step for young neurosurgeons. In both vascular and peripheral nerve pathology, microsurgical techniques are useful tools for proper treatment. Many training models have been described, including ex vivo (chicken wings) and in vivo (rat, rabbit) ones. Complex microsurgery training includes termino-terminal vessel anastomosis and nerve repair. The aim of this study was to describe a reproducible complex microsurgery training model in rats. Materials and methods: The experimental animals were Brown Norway male rats between 10-16 weeks of age (average 13) and weighing between 250-400 g (average 320 g). We performed n=10 rat hind limb replantations. The surgical steps and preoperative management are carefully described. We evaluated vascular patency by clinical assessment: color, temperature, and capillary refill. The rats were inspected daily for any signs of infection. Nerve regeneration was assessed by the footprint method. Results: There were no cases of vascular compromise or autophagia. All rats had long-term survival (>90 days). Nerve regeneration was clinically complete at 6 months postoperatively. The mean operative time was 183 minutes, and the ischemia time was 25 minutes.

  1. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
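
Step 6 of the enumerated method (calculating time series features as latent variables) can be illustrated with a simple windowed computation over a single vital sign. The choice of features here (mean, variability, slope) is illustrative only, not the paper's feature set:

```python
import numpy as np

def window_features(series, window):
    """Split one vital-sign series into consecutive windows of `window`
    samples and compute (mean, std, least-squares slope) per window --
    simple examples of time-series features used as latent variables."""
    feats = []
    t = np.arange(window)
    for i in range(0, len(series) - window + 1, window):
        w = np.asarray(series[i:i + window], dtype=float)
        slope = np.polyfit(t, w, 1)[0]   # per-sample trend within the window
        feats.append((w.mean(), w.std(), slope))
    return feats
```

Window duration and resolution (step 4 of the method) map directly onto the `window` parameter and the sampling rate of the input series.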

  2. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, ergonomics issues of computerized EOPs have not been studied adequately, since industrial practice is still quite limited. This study examined the influence of step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles: Style A, a combination of one- and two-dimensional flowcharts; Style B, a combination of a two-dimensional flowchart and a success logic tree. Step complexity was quantified by a complexity measure model based on an entropy concept. Forty subjects participated in the experiment of EOP execution using the simulated system. Analysis of the experimental data indicates that step complexity and presentation style can significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression analysis results imply that the operation time of a step can be well predicted by step complexity, while the step error rate can only be partly predicted by it. The results of a questionnaire investigation imply that step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.

  3. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tunable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
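
The second-order RKL2 recursion is lengthy, but the structure of the family is visible in its simpler first-order member, RKL1 (also given by Meyer, Balsara & Aslam). A minimal sketch for the 1D heat equation, taking one super-step ten times larger than the explicit stability limit:

```python
import numpy as np

def rkl1_step(u, dt, lap, s):
    """One s-stage RKL1 super-step for u_t = lap(u): the first-order member
    of the Runge-Kutta-Legendre family. A super-step may exceed the explicit
    stability limit by up to a factor (s^2 + s)/2."""
    w = 2.0 / (s * s + s)
    yjm2, yjm1 = u, u + w * dt * lap(u)          # stages Y_0 and Y_1
    for j in range(2, s + 1):                    # three-term recursion
        mu, nu = (2 * j - 1) / j, (1 - j) / j
        yjm1, yjm2 = mu * yjm1 + nu * yjm2 + mu * w * dt * lap(yjm1), yjm1
    return yjm1

# 1D heat equation on a periodic grid, one super-step of 10x the explicit limit
n = 64
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = 1.0 + np.sin(2 * np.pi * x)
lap = lambda v: (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
u = rkl1_step(u, 10 * (0.5 * dx**2), lap, s=5)   # (s^2+s)/2 = 15 >= 10
```

Because each stage is a convex-like combination with coefficients summing to one and the periodic Laplacian sums to zero, the scheme conserves the total of `u`, and the sinusoidal mode decays smoothly despite the oversized step.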

  4. Finite element time domain modeling of controlled-Source electromagnetic data with a hybrid boundary condition

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin

    2017-01-01

    method which is unconditionally stable. We solve the diffusion equation for the electric field with a total field formulation. The finite element system of equation is solved using the direct method. The solutions of electric field, at different time, can be obtained using the effective time stepping...... method with trivial computation cost once the matrix is factorized. We try to keep the same time step size for a fixed number of steps using an adaptive time step doubling (ATSD) method. The finite element modeling domain is also truncated using a semi-adaptive method. We proposed a new boundary...... condition based on approximating the total field on the modeling boundary using the primary field corresponding to a layered background model. We validate our algorithm using several synthetic model studies....

  5. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interferences between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference is regained, as one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments of single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)

  6. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
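
One simple form such a neighbor constraint could take (an assumption for illustration, not necessarily the paper's exact rule) is to forbid any patch's step from exceeding a fixed ratio of its neighbors' steps, iterating until the limits propagate through the patch graph:

```python
def constrain_steps(dts, ratio=2.0):
    """Given locally constrained time steps for a 1D chain of patches, limit
    each step so it never exceeds `ratio` times any neighbour's step.
    Iterating to a fixed point lets a small step 'propagate' its constraint
    outward, respecting the non-local nature of the CFL condition."""
    dts = list(dts)
    changed = True
    while changed:
        changed = False
        for i in range(len(dts)):
            for j in (i - 1, i + 1):
                if 0 <= j < len(dts) and dts[i] > ratio * dts[j]:
                    dts[i] = ratio * dts[j]
                    changed = True
    return dts
```

A patch with a tight local constraint thus caps its neighbors' steps, so a fast wave entering from an adjacent patch is never stepped over.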

  7. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  8. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    Science.gov (United States)

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  9. Linear system identification via backward-time observer models

    Science.gov (United States)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
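
Step two of the procedure relies on the Eigensystem Realization Algorithm; the backward-time twist aside, standard (forward-time) ERA for a SISO system can be sketched as a Hankel-matrix SVD. The Hankel dimensions below are arbitrary choices for illustration:

```python
import numpy as np

def era(markov, r, p=None, q=None):
    """Eigensystem Realization Algorithm: realize a state-space model
    (A, B, C) of order r from pulse-response (Markov) parameters
    Y_1, Y_2, ... of a SISO system."""
    p = p or len(markov) // 2
    q = q or len(markov) // 2 - 1
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]          # truncate to model order r
    S_isqrt, S_sqrt = np.diag(1.0 / np.sqrt(s)), np.diag(np.sqrt(s))
    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt     # shifted-Hankel realization
    B = (S_sqrt @ Vt)[:, :1]
    C = (U @ S_sqrt)[:1, :]
    return A, B, C

# pulse response of the first-order system x+ = 0.5x + u, y = x
markov = [0.5 ** k for k in range(10)]
A, B, C = era(markov, r=1)
```

For this exactly rank-one Hankel matrix the realization recovers the true pole at 0.5 and reproduces the Markov parameters; the paper's contribution is to run the same machinery on backward-time observer Markov parameters and then convert to the forward-time representation.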

  10. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  11. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    During the last fifteen years, a very large effort to standardize the product models employed in product design, manufacturing and other life-cycle phases has been undertaken. This effort has the acronym STEP, and resulted in the International Standard ISO-10303 "Industrial Automation Systems...... - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development...

  12. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information across its life-cycle requires standardized data exchange protocols. The Standard for Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are large and disorganized. Data exchange specifications (DEXs) and templates provide the re-organized information models required in data exchange for specific activities in various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases and application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  13. On an adaptive time stepping strategy for solving nonlinear diffusion equations

    International Nuclear Information System (INIS)

    Chen, K.; Baines, M.J.; Sweby, P.K.

    1993-01-01

    A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equidistributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that the proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs

  14. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

    Full Text Available During the last years, a great range of computer aided tools has been generated to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for the data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol or the extension of an existing protocol would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of AP 214. Therefore no change of the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with the FVA (German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  15. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
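
The interpolation step described in the abstract can be sketched directly: given a horizontal temperature map at the ceiling and one at the floor, the three-dimensional model is built by interpolating linearly in height. Linear interpolation is an assumption here; the patent may use a more elaborate scheme:

```python
import numpy as np

def temperature_field(ceiling, floor, heights, room_height):
    """Interpolate between a floor temperature map (height 0) and a ceiling
    map (height room_height) to produce one horizontal temperature map per
    requested height, stacked into a 3-D array."""
    ceiling = np.asarray(ceiling, dtype=float)
    floor = np.asarray(floor, dtype=float)
    return np.stack([floor + (z / room_height) * (ceiling - floor)
                     for z in heights])
```

Streaming sensor updates would simply refresh the `ceiling` and `floor` maps, after which the 3-D field can be recomputed cheaply in real time.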

  16. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized...... facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared...... daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions....

  17. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.


  19. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    In cooperative localization systems, wireless nodes need to exchange accurate position-related information, such as time-of-arrival (TOA) and angle-of-arrival (AOA), in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB) signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. To speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated using a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. Simulation results are presented to analyze the performance of the estimator.
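The coarse, energy-based first step described above can be illustrated with a small sketch (purely illustrative, not the authors' code): partition the received samples into blocks, compute per-block energy, and take the highest-energy block as the coarse arrival estimate. The block size, the background waveform, and the pulse arriving at sample 700 are all invented assumptions.

```python
import math

def coarse_toa(signal, block):
    # Step 1 of the two-step scheme: per-block energies, pick the strongest block.
    n_blocks = len(signal) // block
    energies = [sum(s * s for s in signal[i * block:(i + 1) * block])
                for i in range(n_blocks)]
    k = max(range(n_blocks), key=energies.__getitem__)
    return k * block  # coarse arrival index; step 2 would refine within this block

# Illustrative received waveform: weak background plus a pulse arriving at sample 700.
sig = [0.01 * math.sin(0.37 * i) for i in range(1000)]
for i in range(700, 720):
    sig[i] += 1.0
print(coarse_toa(sig, block=100))  # -> 700
```

In the full algorithm this coarse index would then seed the hypothesis-testing search for the first arriving path within the selected block.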

  20. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software designed to simulate such systems. Results: We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
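STEPS builds on a composition-and-rejection variant of the Gillespie SSA. The basic direct-method SSA that such variants refine can be sketched for a single irreversible reaction A -> B; this is a toy illustration with arbitrary rate constant and molecule counts, not STEPS code.

```python
import math
import random

def ssa_decay(n_a=10, k=0.5, seed=42):
    """Direct-method Gillespie SSA for the irreversible reaction A -> B."""
    rng = random.Random(seed)
    a, b, t = n_a, 0, 0.0
    while a > 0:
        propensity = k * a                                # only one reaction channel
        t += -math.log(1.0 - rng.random()) / propensity   # exponential waiting time
        a, b = a - 1, b + 1                               # fire the reaction
    return t, a, b

t_final, a_final, b_final = ssa_decay()
```

With several reaction channels, the direct method additionally draws which channel fires in proportion to its propensity; composition-rejection organizes that draw so it scales to very many channels.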

  1. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    Science.gov (United States)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence, obtained with the Community Atmosphere Model (CAM) version 5.3, shows that the proposed procedure can distinguish rounding-level solution changes from the impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
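The essence of the procedure, stripped down to a toy ODE, is to calibrate a tolerance from the reference model's own time-step sensitivity and flag any run whose difference from the reference exceeds it. The sketch below uses forward Euler on dy/dt = -ky; the step sizes and perturbations are invented for illustration, not those used with CAM.

```python
def euler(k, y0, dt, t_end):
    # forward-Euler integration of dy/dt = -k*y
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-k * y)
        t += dt
    return y

# Time-step sensitivity of the reference model: solution change when dt is halved.
ref = euler(1.0, 1.0, 0.01, 1.0)
tolerance = abs(ref - euler(1.0, 1.0, 0.005, 1.0))

def tsc_pass(candidate):
    """Pass if the candidate run stays within the known time-step sensitivity."""
    return abs(candidate - ref) <= tolerance

rounding_ok = tsc_pass(euler(1.0, 1.0 + 1e-9, 0.01, 1.0))  # rounding-level change
param_ok = tsc_pass(euler(1.1, 1.0, 0.01, 1.0))            # parameter perturbation
```

The rounding-level initial-condition change stays far inside the time-step sensitivity and passes, while perturbing the parameter k produces a difference orders of magnitude larger and fails, mirroring the pass/fail logic of the proposed test.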

  2. Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps

    International Nuclear Information System (INIS)

    Tiselj, Iztok; Cerne, Gregor

    2000-01-01

    The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: The lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms.

  3. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    …the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancellation of the interface contributions to the energy balance equation, and thus the stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can … by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd.

  4. Sharing Steps in the Workplace: Changing Privacy Concerns Over Time

    DEFF Research Database (Denmark)

    Jensen, Nanna Gorm; Shklovski, Irina

    2016-01-01

    …study of a Danish workplace participating in a step counting campaign. We find that the concerns of employees who choose to participate and those who choose not to differ. Moreover, the privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers…

  5. A Straightforward Convergence Method for ICCG Simulation of Multiloop and Time-Stepping FE Model of Synchronous Generators with Simultaneous AC and Rectified DC Connections

    Directory of Open Access Journals (Sweden)

    Shanming Wang

    2015-01-01

    Electric machines now integrate with power electronics to form inseparable systems in many applications demanding high performance. In such systems, two kinds of nonlinearities coexist: the magnetic nonlinearity of the iron core and the circuit nonlinearity caused by power electronics devices, which makes simulation time-consuming. In this paper, the multiloop model combined with the FE model of AC-DC synchronous generators, as one example of an electric machine with a power electronics system, is set up. The FE method is applied for the magnetic nonlinearity, and a variable-step variable-topology simulation method is applied for the circuit nonlinearity. In order to improve the simulation speed, the incomplete Cholesky conjugate gradient (ICCG) method is used to solve the state equation. However, when a power electronics device switches off, a convergence difficulty occurs, so a straightforward approach to achieve convergence of the simulation is proposed. Finally, the simulation results are compared with experiments.

  6. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step…

  7. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    Science.gov (United States)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Model performance was evaluated using various performance criteria. From the results, the LGP models are found to be superior to the ANN and ANFIS models, especially in predicting the peak inflows at both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better because, apart from reduced noise in the data, they benefited from better techniques and training approaches, appropriate selection of network architecture, required inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to large variations and a smaller number of observed values.

  8. Specification of a STEP Based Reference Model for Exchange of Robotics Models

    DEFF Research Database (Denmark)

    Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold

    ESPRIT Project 6457, "Interoperability of Standards for Robotics in CIME" (InterRob), belongs to the Subprogramme "Computer Integrated Manufacturing and Engineering" of ESPRIT, the European Specific Programme for Research and Development in Information Technology supported by the European Commission. InterRob aims to develop an integrated solution to precision manufacturing by combining product data and database technologies with robotic off-line programming and simulation. Benefits arise from the use of high-level simulation tools and from developing standards for the exchange of product model data … robot programming; the descriptions of geometry, kinematics, robotics, dynamics, and controller data using STEP are addressed as major goals of the project. The Project Consortium has now released the "Specification of a STEP Based Reference Model for Exchange of Robotics Models", on which a series…

  9. Genomic prediction in a nuclear population of layers using single-step models.

    Science.gov (United States)

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    Single-step genomic prediction methods have been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped with a 600 K SNP chip. Four traits were analyzed: body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than those of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in predictive ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.

  10. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
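The difference between a staircase (beginning-of-step) scheme and a predictor-corrector style scheme can be seen on a toy depletion equation dN/dt = -r(N)·N with a concentration-dependent rate r(N) = kN, which has the analytic solution N(t) = N0/(1 + k·N0·t). This is only a scalar caricature of Monte Carlo burnup coupling with invented constants, not the MCB5 algorithms themselves.

```python
import math

K = 1.0                       # illustrative rate constant
rate = lambda n: K * n        # concentration-dependent depletion rate r(N)

def staircase(n0, dt, steps):
    # hold the beginning-of-step rate constant over each step
    n = n0
    for _ in range(steps):
        n *= math.exp(-rate(n) * dt)
    return n

def predictor_corrector(n0, dt, steps):
    # predict with the beginning-of-step rate, correct with the averaged rate
    n = n0
    for _ in range(steps):
        n_pred = n * math.exp(-rate(n) * dt)
        r_avg = 0.5 * (rate(n) + rate(n_pred))
        n *= math.exp(-r_avg * dt)
    return n

exact = 1.0 / (1.0 + K * 1.0 * 2.0)            # analytic N(t=2) for N0 = 1
err_stair = abs(staircase(1.0, 0.5, 4) - exact)
err_pc = abs(predictor_corrector(1.0, 0.5, 4) - exact)
```

At this deliberately coarse step size, the predictor-corrector variant tracks the analytic solution far more closely than the staircase scheme, which is the motivation for the advanced step models compared in the paper.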

  11. Modeling imperfectly repaired system data via grey differential equations with unequal-gapped times

    International Nuclear Information System (INIS)

    Guo Renkuan

    2007-01-01

    In this paper, we argue that grey differential equation models are useful in repairable system modeling. The argument starts with a review of the GM(1,1) model with equal- and unequal-spaced stopping time sequences. In terms of two-stage GM(1,1) filtering, system stopping time can be partitioned into a system intrinsic function and a repair effect. Furthermore, we propose an approach that uses a grey differential equation to specify a semi-statistical membership function for system intrinsic function times. We also use the GM(1,N) model to model system stopping times and the associated operating covariates, and propose an unequal-gapped GM(1,N) model for such analysis. Finally, we investigate GM(1,1)-embedded systematic grey equation system modeling of imperfectly repaired system operating data. Practical examples are given in a step-by-step manner to illustrate grey differential equation modeling of repairable system data.
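As background, the classical equal-spaced GM(1,1) model that the paper generalizes can be sketched as follows: accumulate the series (AGO), fit the grey development coefficient a and grey input b by least squares against the background values, and forecast with the exponential time-response function. The sample data are invented for illustration.

```python
import math

def gm11_forecast(x0, k):
    """Fit GM(1,1) to the series x0 and forecast its k-th term (1-indexed, k > len(x0))."""
    # 1-AGO: accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z1 and least-squares fit of x0_k = -a*z1_k + b
    zs = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, len(x0))]
    ys = x0[1:]
    n = len(zs)
    szz, sz = sum(z * z for z in zs), sum(zs)
    szy, sy = sum(z * y for z, y in zip(zs, ys)), sum(ys)
    det = szz * n - sz * sz            # normal equations of the 2-parameter fit
    a = (sz * sy - n * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function, then inverse AGO to recover the forecast
    xh = lambda j: (x0[0] - b / a) * math.exp(-a * (j - 1)) + b / a
    return xh(k) - xh(k - 1)

series = [10.0, 11.05, 12.21, 13.49]   # invented, roughly exponential data
pred = gm11_forecast(series, 5)
```

On near-exponential data such as this, the one-step-ahead forecast lands close to the geometric continuation of the series, which is why GM(1,1) works well for short, smoothly trending samples like repair-time sequences.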

  12. Step-indexed Kripke models over recursive worlds

    DEFF Research Database (Denmark)

    Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan

    2011-01-01

    …worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models…

  13. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    Science.gov (United States)

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border crossing…

  14. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real-time prediction of waves over typical durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of artificial neural networks (ANN), genetic programming (GP) and model trees (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The waves predicted by the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.

  15. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation, which often happens when the time step is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce this displacement, a judicious choice of discretization scheme should be made. To this end, a recent discretization scheme based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, is analysed. This scheme is shown to provide an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics.
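The time-step dependence can be seen with the simplest example: explicit-Euler discretization of the logistic growth equation dx/dt = x(1 - x). For a small step the iterates settle on the fixed point x = 1 of the continuous flow; for h = 2.5 the same discretization produces a bounded oscillating orbit that no longer resembles the continuous solution. The step sizes are illustrative, and this sketch does not use the Monaco and Normand-Cyrot scheme.

```python
def euler_logistic(h, steps=100, x0=0.5):
    # explicit-Euler discretization of dx/dt = x*(1 - x)
    x = x0
    for _ in range(steps):
        x = x + h * x * (1.0 - x)
    return x

x_small = euler_logistic(0.5)   # converges to the true equilibrium x = 1
x_large = euler_logistic(2.5)   # bounded, but locked on a spurious cycle
```

The large-step orbit is exactly the kind of "spurious" solution discussed above: it is not an artifact of instability (it stays bounded) but a qualitatively different attractor of the discrete map.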

  16. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management. Actual ET depends on an estimate of a water stress index and the average soil water in the crop root zone, and therefore on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root-zone water content, time step, crop sensitivity, and soil type. In this paper different numerical methods are compared for different soil textures and crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals the evapotranspiration, ET. In this approach deep percolation is generally ignored because of the deep water table and the negligible unsaturated hydraulic conductivity below the rooting depth. This differential equation may be solved analytically or numerically with different algorithms. We adopted four numerical methods, namely the explicit, implicit, and modified Euler methods, the midpoint method, and the third-order Heun method, to approximate the differential equation. Three general soil types (sand, silt, and clay) and three crop sensitivity levels (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which the crop faces water stress, was adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
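A minimal sketch of such a comparison (with invented parameter values, a linear water-stress function, and initial soil water already below the stress threshold, so that dW/dt = -ETc·W/[(1-p)·TAW] has an exact exponential solution) shows why a higher-order scheme pays off at a daily time step:

```python
import math

ETC, TAW, P = 5.0, 100.0, 0.5          # illustrative: mm/d, mm, depletion fraction
THRESH = (1.0 - P) * TAW               # stress threshold (mm)

def dW(w):
    # actual ET: potential ET scaled by a linear water-stress coefficient
    return -ETC * min(1.0, w / THRESH)

def euler_step(w, dt):
    return w + dt * dW(w)

def heun_step(w, dt):
    k1 = dW(w)
    k2 = dW(w + dt * k1)
    return w + 0.5 * dt * (k1 + k2)

w0, days = 40.0, 30                    # start inside the stress regime
exact = w0 * math.exp(-ETC / THRESH * days)
w_euler = w_heun = w0
for _ in range(days):
    w_euler, w_heun = euler_step(w_euler, 1.0), heun_step(w_heun, 1.0)
```

With a one-day step, the second-order Heun scheme stays markedly closer to the analytic solution than explicit Euler, illustrating the combined effect of method order and time step that the paper quantifies for real soils and crops.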

  17. An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling

    Directory of Open Access Journals (Sweden)

    A. Iqbal

    2014-12-01

    Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication, so it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered the most reliable for modeling anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for the modeling of tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). Very good agreement is found between SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than WGM and provides more detail of propagation effects compared to SSFM.

  18. A sandpile model of grain blocking and consequences for sediment dynamics in step-pool streams

    Science.gov (United States)

    Molnar, P.

    2012-04-01

    …variability in the system response by the processes of grain blocking and step collapse. The temporal correlation in input and output rates and the number of grains stored in the system at any given time are quantified by spectral analysis and statistics of long-range dependence. Although the model is only conceptually conceived to represent the real processes of step formation and collapse, connections will be made between the modelling results and some field and laboratory data on step-pool systems. The main focus in the discussion will be to demonstrate how even in such a simple model the processes of grain blocking and step collapse may impact the sediment transport rates to the point that certain changes in input are not visible anymore, along the lines of "shredding the signals" proposed by Jerolmack and Paola (2010). The consequences are that the notions of stability and equilibrium, the attribution of cause and effect, and the timescales of process and form in step-pool systems, and perhaps in many other fluvial systems, may have very limited applicability.
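A toy 1-D lattice version of such a model shows the basic bookkeeping of input, storage, and output. This sketch is purely illustrative: "grain blocking" is reduced to a static elevated stability threshold at randomly chosen key-grain sites, and all parameters are invented, not the paper's dynamics.

```python
import random

def run_sandpile(n_grains, n_sites=10, p_block=0.2, seed=7):
    rng = random.Random(seed)
    # blocked ("key grain") sites jam at a higher critical slope than ordinary sites
    crit = [2 if rng.random() < p_block else 1 for _ in range(n_sites)]
    h = [0] * n_sites
    out = 0
    for _ in range(n_grains):
        h[0] += 1                      # feed one grain per time step at the top
        unstable = True
        while unstable:                # relax until every local slope is stable
            unstable = False
            for i in range(n_sites):
                downhill = h[i + 1] if i + 1 < n_sites else 0
                if h[i] - downhill > crit[i]:
                    h[i] -= 1
                    if i + 1 < n_sites:
                        h[i + 1] += 1
                    else:
                        out += 1       # grain leaves the system: output flux
                    unstable = True
    return h, out

heights, n_out = run_sandpile(200)
```

Even in this caricature, the output flux is an intermittent, avalanche-like filtering of the perfectly steady input, which is the mechanism by which input signals can be "shredded" before they reach the outlet.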

  19. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.

  20. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    Science.gov (United States)

    Dessens, Olivier

    2016-04-01

    Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important, as it can greatly influence future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) have formulated a method of reconstructing general circulation models' (GCMs) climate response to emission trajectories through an idealized experiment. This method, called the "step-response approach", is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions in meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification for the results of the model in terms of climate and energy system responses. The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of

  1. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  2. Stepping Stones through Time

    Directory of Open Access Journals (Sweden)

    Emily Lyle

    2012-03-01

    Full Text Available Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.

  3. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte-Carlo simulations

    Science.gov (United States)

    Patrone, Paul; Einstein, T. L.; Margetis, Dionisios

    2011-03-01

    We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.

  4. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  5. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  6. Modelling nematode movement using time-fractional dynamics.

    Science.gov (United States)

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
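A minimal sketch of a correlated random walk of the kind described, where the new heading persists from the previous one perturbed by a turning angle, can be written as follows. The von Mises turning-angle and exponential speed distributions are illustrative assumptions, not the empirical distributions measured in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_random_walk(n_steps, kappa=2.0, mean_speed=1.0):
    """2D correlated random walk: the heading persists between steps and is
    perturbed by a turning angle each step; speed is drawn afresh each step.
    Distribution choices and parameters are purely illustrative."""
    heading = 0.0
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.vonmises(0.0, kappa)       # correlated turning angle
        speed = rng.exponential(mean_speed)       # illustrative speed law
        step = speed * np.array([np.cos(heading), np.sin(heading)])
        pos[i + 1] = pos[i] + step
    return pos

path = correlated_random_walk(500)
net_displacement = np.linalg.norm(path[-1] - path[0])
```

Larger `kappa` means stronger directional persistence; successive steps are then far from independent, which is the property that rules out a simple (uncorrelated) random walk.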

  7. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By adding Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.

  8. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

We show that the wave equation solution using a conventional finite-difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function, where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite-difference scheme that is frequently used in more conventional finite-difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite-difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with pre- and post-stack migration results.

  9. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins under various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) performing at 70 kVp and 7 mA along with a 12-step aluminum stepwedge (1 mm incremental steps) using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray scale). Furthermore, the linear regression model of aluminum thickness and absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly among the various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R2=0.999). Data from the reduced modified stepwedge were remarkably comparable with those from the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
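The gray-value-to-absorbency conversion and the aluminum-equivalent regression described above can be sketched as follows. The 12 stepwedge gray values are hypothetical stand-ins for measured data, and a base-10 logarithm is assumed for A = -log(1 - G/255):

```python
import numpy as np

def absorbency(gray):
    """Convert an 8-bit mean gray value G to absorbency A = -log(1 - G/255)
    (base-10 logarithm assumed)."""
    return -np.log10(1.0 - np.asarray(gray, dtype=float) / 255.0)

# Hypothetical calibration: mean gray value of each 1 mm aluminum step.
al_mm = np.arange(1, 13)                   # 12 steps, 1 mm increments
gray  = np.array([60, 95, 122, 143, 160, 174, 186, 196, 204, 211, 217, 222])
A_steps = absorbency(gray)

# Linear regression A = a * thickness + b, fitted by least squares.
a, b = np.polyfit(al_mm, A_steps, 1)

def aluminum_equivalent(gray_material):
    """Invert the calibration to express a material's radiopacity in mm Al."""
    return (absorbency(gray_material) - b) / a

eq_mm = aluminum_equivalent(150)           # e.g. a composite sample's gray value
```

The same three lines of regression code apply to the modified 3-step stepwedge: simply fit on the 2nd, 5th, and 8th rows of the calibration arrays.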

  10. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    Science.gov (United States)

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  11. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location.

    Science.gov (United States)

    Bancroft, Matthew J; Day, Brian L

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body's momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait.

  12. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems: a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
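A sketch of the first-order Legendre super-time-stepping recursion (RKL1) applied to 1D diffusion is given below, using the stage coefficients associated with the Legendre recursion, mu_j = (2j-1)/j, nu_j = (1-j)/j, and mu~_j = mu_j * 2/(s² + s); the 0.9 safety factor on the s(s+1)/2 step enlargement is an assumption of this sketch:

```python
import numpy as np

def laplacian(u, dx):
    """Second-order 1D Laplacian with periodic boundaries."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def rkl1_superstep(u, s, dt, nu, dx):
    """One RKL1 super-time-step with s internal stages (first-order form)."""
    w1 = 2.0 / (s * s + s)                              # mu~_j = mu_j * w1
    y_prev2 = u
    y_prev1 = u + w1 * dt * nu * laplacian(u, dx)       # stage j = 1
    for j in range(2, s + 1):
        mu_j, nu_j = (2 * j - 1) / j, (1 - j) / j
        y_j = (mu_j * y_prev1 + nu_j * y_prev2
               + mu_j * w1 * dt * nu * laplacian(y_prev1, dx))
        y_prev2, y_prev1 = y_prev1, y_j
    return y_prev1

nx = 128
dx = 1.0 / nx
nu = 0.01
x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial Gaussian pulse
mass0 = u.sum()

dt_explicit = dx**2 / (2.0 * nu)             # forward-Euler stability limit
s = 8
dt = 0.9 * 0.5 * s * (s + 1) * dt_explicit   # ~32x the explicit limit
for _ in range(20):
    u = rkl1_superstep(u, s, dt, nu, dx)
```

Each superstep here advances the solution roughly 32 explicit time steps at the cost of only 8 Laplacian evaluations, and the linear-combination structure of the stages conserves total mass exactly on a periodic grid.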

  13. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
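The dual-Hamiltonian idea (propagate with the inexpensive Hamiltonian, then accept or reject with a Metropolis test on the expensive one) can be sketched on a 1D toy system. Both potentials, and all step-size choices, are hypothetical stand-ins for the paper's molecular Hamiltonians:

```python
import numpy as np

rng = np.random.default_rng(1)

def U_cheap(x):     return 0.5 * x * x                 # inexpensive surrogate
def gU_cheap(x):    return x
def U_expensive(x): return 0.5 * x * x + 0.1 * x**4    # reference (target)

def leapfrog_cheap(x, p, dt, n):
    """Reversible, volume-preserving propagation under the cheap Hamiltonian."""
    p = p - 0.5 * dt * gU_cheap(x)
    for i in range(n):
        x = x + dt * p
        if i < n - 1:
            p = p - dt * gU_cheap(x)
    p = p - 0.5 * dt * gU_cheap(x)
    return x, p

def hybrid_step(x, dt=0.2, n_leap=10):
    p = rng.normal()
    x_new, p_new = leapfrog_cheap(x, p, dt, n_leap)
    # Metropolis test on the *expensive* Hamiltonian enforces detailed balance
    # with respect to the Boltzmann distribution of the expensive potential:
    dH = (U_expensive(x_new) + 0.5 * p_new**2) - (U_expensive(x) + 0.5 * p**2)
    return x_new if rng.random() < min(1.0, np.exp(-dH)) else x

x, samples = 0.0, []
for i in range(20000):
    x = hybrid_step(x)
    if i >= 2000:                 # discard burn-in
        samples.append(x)
samples = np.asarray(samples)
```

Even though the dynamics never sees the quartic term, the accept/reject step makes the chain sample exp(-U_expensive); the discretization and surrogate errors are absorbed by the Metropolis criterion, as in the abstract.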

  14. Positivity-preserving dual time stepping schemes for gas dynamics

    Science.gov (United States)

    Parent, Bernard

    2018-05-01

A new approach to discretizing the temporal derivative of the Euler equations is presented here which can be used with dual time stepping. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure, resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach has the advantage over alternatives that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions smaller than that of the second- or third-order accurate backward-difference formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique, which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods, which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.

  15. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    Science.gov (United States)

    Octova, A.; Sule, R.

    2018-04-01

Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and receivers are placed in the others. First-arrival travel time data recorded by each receiver are used as the input to the seismic tomography method. This research is divided into three steps. The first step is reconstructing the synthetic model based on field parameters; the field configurations comprise 24 receivers and 45 receivers. The second step is applying the inversion process to the field data, which consist of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomography processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5 with elongated and rounded structures. In resolution tests using a checkerboard, anomalies can still be identified down to a 2 meter x 2 meter size. Travel time cross-hole seismic tomography analysis shows this method is well suited to describing subsurface structure and boundary layers; the size and position of anomalies can be recognized and interpreted easily.

  16. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif

    2014-01-01

Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to add to the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.

  17. One-step electrodeposition process of CuInSe2: Deposition time effect

    Indian Academy of Sciences (India)

    Administrator

CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.

  18. Age-related differences in lower-limb force-time relation during the push-off in rapid voluntary stepping.

    Science.gov (United States)

    Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G

    2010-12-01

This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to examine the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick, powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times than young people, particularly during the push-off preparatory phase. Fifteen young and fifteen older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body-weight-normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such a delay may be associated with the inability to react and recruit muscles quickly. Thus, training the elderly to step quickly in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.
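The force-time metrics used in this record (peak force and time to peak force) can be extracted from a sampled push-off trace as sketched below; the synthetic force curve and sampling rate stand in for real force-platform data:

```python
import numpy as np

fs = 1000.0                              # assumed force-platform sampling rate (Hz)
t = np.arange(0.0, 0.6, 1.0 / fs)
# Synthetic push-off force trace (N), shaped like a smooth rise and decay:
force = 300.0 * np.sin(np.pi * t / 0.6) ** 2 * np.exp(-((t - 0.35) / 0.2) ** 2)

i_peak = int(np.argmax(force))
peak_force = float(force[i_peak])        # peak propulsive force (N)
time_to_peak = float(t[i_peak])          # delay from push-off onset to peak (s)
rfd = peak_force / time_to_peak          # crude rate-of-force-development summary
```

The study's central contrast maps onto `time_to_peak`: older adults reach their peak later even when the `peak_force` difference is modest, so the ratio `rfd` captures the functionally relevant deficit.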

  19. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.

    2010-04-01

    We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.

  20. Physiological and cognitive mediators for the association between self-reported depressed mood and impaired choice stepping reaction time in older people.

    NARCIS (Netherlands)

    Kvelde, T.; Pijnappels, M.A.G.M.; Delbaere, K.; Close, J.C.; Lord, S.R.

    2010-01-01

Background. The aim of the study was to use path analysis to test a theoretical model proposing that the relationship between self-reported depressed mood and choice stepping reaction time (CSRT) is mediated by psychoactive medication use, physiological performance, and cognitive ability. A total of

  1. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
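The variable-time-step idea can be sketched as a controller that re-solves the circuit only when the input has drifted beyond a tolerance and otherwise grows the step. The feeder "solver" and load profile below are illustrative stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_feeder(load):
    """Stand-in for a full power-flow solve (illustrative voltage-vs-load law)."""
    return 1.05 - 0.08 * load

# One day of per-second normalized load data, mostly slowly varying:
tsec = np.arange(86400)
load = 0.6 + 0.3 * np.sin(2 * np.pi * tsec / 86400) + 0.01 * rng.standard_normal(86400)

dt_min, dt_max, tol = 1, 900, 0.02
i, dt, solves = 0, dt_min, 0
last_load, v_min = np.inf, np.inf
while i < load.size:
    if abs(load[i] - last_load) > tol:
        v = solve_feeder(load[i])          # input changed enough: re-solve
        solves += 1
        last_load = load[i]
        dt = dt_min                        # and fall back to fine stepping
    else:
        dt = min(dt_max, 2 * dt)           # quiet period: double the step
    v_min = min(v_min, v)                  # track the worst-case voltage metric
    i += dt
reduction = 1.0 - solves / load.size       # fraction of per-second solves avoided
```

Because the step collapses back to `dt_min` whenever the input changes, extreme-voltage metrics such as `v_min` are still captured, while the vast majority of the per-second solves are skipped.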

  2. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main performance costs of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation required by the imaging condition. Based on reduced order modeling, we proposed an algorithm that addresses all the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling is well suited to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.

  3. Elderly fallers enhance dynamic stability through anticipatory postural adjustments during a choice stepping reaction time

    Directory of Open Access Journals (Sweden)

    Romain Tisserand

    2016-11-01

Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), in which elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F lengthen their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT in which kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated-measures ANOVAs. Results for F compared to NF showed that stepping time is lengthened, due to a longer APA phase. During APA, F seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off in the respective direction. By lengthening their APA, elderly F use a safer balance strategy that prioritizes dynamic stability over the speed objective of the task. Such a choice of balance strategy probably stems from muscular limitations and/or a greater fear of falling and paradoxically indicates an increased risk of falling.

  4. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps for long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
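The benefit of a structure-preserving (symplectic) time discretization can be illustrated on a toy harmonic oscillator: forward Euler pumps energy into the system, while a symplectic Störmer-Verlet step keeps the energy bounded over long runs. This is an illustration of the principle only, not the paper's symplectic FFD operator:

```python
import numpy as np

def euler_step(q, p, dt, w2):
    """Non-symplectic forward Euler: energy grows without bound."""
    return q + dt * p, p - dt * w2 * q

def verlet_step(q, p, dt, w2):
    """Symplectic Stormer-Verlet: energy error stays bounded for all time."""
    p = p - 0.5 * dt * w2 * q
    q = q + dt * p
    p = p - 0.5 * dt * w2 * q
    return q, p

w2, dt, n = 1.0, 0.1, 10000
qe, pe = 1.0, 0.0          # Euler trajectory
qv, pv = 1.0, 0.0          # Verlet trajectory
for _ in range(n):
    qe, pe = euler_step(qe, pe, dt, w2)
    qv, pv = verlet_step(qv, pv, dt, w2)

E_euler  = 0.5 * pe**2 + 0.5 * w2 * qe**2   # blows up exponentially
E_verlet = 0.5 * pv**2 + 0.5 * w2 * qv**2   # stays near the initial 0.5
```

The same distinction is what the symplectic time stepping buys for the discretized wave equation: long simulations with large time steps remain stable instead of accumulating energy.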

  5. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    Science.gov (United States)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

Subduction is an essentially multi-scale process, with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. Modelling of such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique allows us to follow in detail the deformation process during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to the conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model an average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. 
We will also present results of the modeling of deformation of the
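The adaptive stepping logic described above can be sketched as a simple controller. This is an illustrative sketch, not the authors' code: the slip-rate threshold, growth factor, and toy slip-rate history are assumptions; only the 40 sec minimum and 5 yr maximum step come from the abstract.

```python
# Hedged sketch of a velocity-triggered adaptive time-step controller.
# Threshold, growth factor and toy dynamics are illustrative assumptions.

DT_MIN = 40.0                  # seconds, used during an earthquake
DT_MAX = 5 * 365.25 * 86400.0  # ~5 years, used in the interseismic phase
GROWTH = 2.0                   # factor by which dt recovers after an event
V_SEISMIC = 1e-3               # m/s, slip-rate threshold flagging instability

def next_dt(dt, max_slip_rate):
    """Drop to DT_MIN when unstable, otherwise grow gradually to DT_MAX."""
    if max_slip_rate > V_SEISMIC:
        return DT_MIN
    return min(dt * GROWTH, DT_MAX)

# Toy usage: a fake slip-rate history (m/s) containing one "earthquake".
dt = DT_MAX
history = []
for rate in [1e-10, 1e-9, 5e-2, 1e-2, 1e-5, 1e-10]:
    dt = next_dt(dt, rate)
    history.append(dt)
```

The controller collapses the step the instant the slip rate crosses the threshold and then relaxes it geometrically, which is the qualitative behaviour the abstract describes.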

  6. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Directory of Open Access Journals (Sweden)

    H. Nakajima

    2006-01-01

    Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. Parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the range of gravity levels examined in this study (up to 40g), the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  7. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Science.gov (United States)

    Nakajima, H.; Stadler, A. T.

    2006-10-01

    Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. Parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the range of gravity levels examined in this study (up to 40g), the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  8. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    Science.gov (United States)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.

  9. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second

  10. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM, in contrast to video modeling and video prompting, allows repetition of the video model (looping) as many times as needed while the user completes…

  11. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of a recurrent neural network to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
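The phase-space reconstruction step that CERNN tunes can be illustrated with a plain time-delay embedding. A minimal sketch, assuming an embedding dimension m and delay tau chosen purely for illustration (the paper's point is that CERNN estimates these parameters itself):

```python
# Hedged sketch: time-delay (phase-space) embedding. The values of m and
# tau here are illustrative, not values from the paper.

def delay_embed(series, m, tau):
    """Return m-dimensional delay vectors (x_t, x_{t-tau}, ...,
    x_{t-(m-1)tau}) for every admissible t."""
    n = len(series)
    start = (m - 1) * tau
    return [
        tuple(series[t - k * tau] for k in range(m))
        for t in range(start, n)
    ]

vectors = delay_embed(list(range(10)), m=3, tau=2)
# each vector pairs a sample with its lagged history: the input a
# recurrent predictor would map to the next value of the series
```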

  12. A Step-indexed Semantic Model of Types for the Call-by-Name Lambda Calculus

    OpenAIRE

    Meurer, Benedikt

    2011-01-01

    Step-indexed semantic models of types were proposed as an alternative to purely syntactic safety proofs using subject-reduction. Building upon the work by Appel and others, we introduce a generalized step-indexed model for the call-by-name lambda calculus. We also show how to prove type safety of general recursion in our call-by-name model.

  13. How to Use the Actor-Partner Interdependence Model (APIM) To Estimate Different Dyadic Patterns in MPLUS: A Step-by-Step Tutorial

    Directory of Open Access Journals (Sweden)

    Fitzpatrick, Josée

    2016-01-01

    Full Text Available Dyadic data analysis with distinguishable dyads assesses the variance not only between dyads but also within the dyad when members are distinguishable on a known variable. In past research, the Actor-Partner Interdependence Model (APIM) has been the statistical model of choice to take this interdependence into account. Although this method has received considerable interest in the past decade, to our knowledge no specific guide or tutorial exists that describes how to test an APIM model. To close this gap, this article provides researchers with a step-by-step tutorial for assessing the most recent advancements of the APIM with the use of structural equation modeling (SEM). The tutorial also utilizes the statistical program MPLUS.

  14. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.; Sen, Mrinal K.

    2010-01-01

    popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM

  15. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    NARCIS (Netherlands)

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

    We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured

  16. Rotordynamic analysis for stepped-labyrinth gas seals using Moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of leakage characteristics with Scharrer's analysis, but underpredicts leakage by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values than Scharrer's analysis.
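Moody's explicit wall-friction-factor correlation, which the analysis adopts in place of Blasius' model, is simple to evaluate. A minimal sketch assuming the commonly cited Moody (1947) approximation to the Colebrook relation; the Reynolds number and relative roughness in the usage line are illustrative, not values from the paper:

```python
def moody_friction_factor(reynolds, rel_roughness):
    """Moody's (1947) explicit approximation to the friction factor,
    valid roughly for 4e3 < Re < 1e8 and relative roughness e/D < 0.01."""
    return 0.0055 * (1.0 + (2.0e4 * rel_roughness + 1.0e6 / reynolds) ** (1.0 / 3.0))

# Illustrative usage: Re = 1e5, e/D = 1e-4
f = moody_friction_factor(1.0e5, 1.0e-4)
```

Evaluating the friction factor locally from the cavity flow conditions is what supplies the wall shear stresses in the bulk-flow control volume.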

  17. Numerical characterisation of one-step and three-step solar air heating collectors used for cocoa bean solar drying.

    Science.gov (United States)

    Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel; La Madrid, Raúl

    2017-12-01

    In the northern coastal and jungle areas of Peru, cocoa beans are dried using artisan methods, such as direct exposure to sunlight. This traditional process is time intensive, leading to a reduction in productivity and, therefore, delays in delivery times. The present study was intended to numerically characterise the thermal behaviour of three configurations of solar air heating collectors in order to determine which demonstrated the best thermal performance under several controlled operating conditions. For this purpose, a computational fluid dynamics model was developed to describe the simultaneous convective and radiative heat transfer phenomena under several operating conditions. The computational fluid dynamics model was first validated against measurements from a one-step solar air heating collector. We then simulated two further three-step solar air heating collectors in order to identify which demonstrated the best thermal performance in terms of outlet air temperature and thermal efficiency. The numerical results show that, under the same solar irradiation exposure area and operating conditions, the three-step solar air heating collector with the collector plate mounted between the second and third channels was 67% more thermally efficient than the one-step solar air heating collector. This is because the air's exposure to the collector plate surface in this three-step device was twice that in the one-step collector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Statistical modelling of space-time processes with application to wind power

    DEFF Research Database (Denmark)

    Lenzi, Amanda

    . This thesis aims at contributing to the wind power literature by building and evaluating new statistical techniques for producing forecasts at multiple locations and lead times using spatio-temporal information. By exploring the features of a rich portfolio of wind farms in western Denmark, we investigate...... propose spatial models for predicting wind power generation at two different time scales: for annual average wind power generation and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial...

  19. Comparison of Different Turbulence Models for Numerical Simulation of Pressure Distribution in V-Shaped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Zhaoliang Bai

    2017-01-01

    Full Text Available The V-shaped stepped spillway is a new type of stepped spillway, and its pressure distribution is quite different from that of the traditional stepped spillway. In this paper, five turbulence models were used to simulate the pressure distribution in the skimming flow regime. Compared with the measured values, the realizable k-ε model had better precision in simulating the pressure distribution. The flow patterns of the V-shaped and traditional stepped spillways are then presented to illustrate the unique pressure distribution, using the realizable k-ε turbulence model.

  20. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
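The prealignment step can be illustrated with a brute-force version of the correlation search. The paper computes this with fast Fourier transforms for speed; this sketch evaluates every candidate offset directly on made-up profiles, which yields the same maximizing shift:

```python
# Hedged sketch of ChromAlign's first step: find the temporal offset that
# maximizes the dot product between two chromatographic profiles. The
# paper uses FFT-based correlation; this O(n^2) direct search is shown
# only for clarity, on illustrative data.

def best_offset(reference, sample, max_shift):
    """Return the shift (in scans) of `sample` that best overlaps `reference`."""
    best, best_score = 0, float("-inf")
    n = len(reference)
    for shift in range(-max_shift, max_shift + 1):
        # overlap region of reference[i] against sample[i - shift]
        score = sum(
            reference[i] * sample[i - shift]
            for i in range(max(0, shift), min(n, n + shift))
        )
        if score > best_score:
            best, best_score = shift, score
    return best

ref = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
smp = [0, 5, 9, 5, 0, 0, 0, 0, 0, 0]   # same peak, two scans earlier
shift = best_offset(ref, smp, max_shift=4)
```

In the full algorithm this offset then restricts which pairs of mass scans enter the correlation matrix, before dynamic programming extracts the alignment path.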

  1. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  2. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  3. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  4. Time step size limitation introduced by the BSSN Gamma Driver

    Energy Technology Data Exchange (ETDEWEB)

    Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)

    2010-08-21

    Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a manner similar to a Courant-Friedrichs-Lewy condition, but one that is independent of the spatial resolution. We give a didactic explanation of this issue, show why mesh refinement simulations in particular suffer from it, and point to a simple remedy. (note)

  5. Numerical Investigation of Transitional Flow over a Backward Facing Step Using a Low Reynolds Number k-ε Model

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter V.

    In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar-to-turbulent transitional flow over a backward facing step...

  6. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    Science.gov (United States)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2014-04-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  7. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R^2) of the return map, and the step time variabilities were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability than the others. Additionally, elderly participants who had fallen in the past year had a higher amplitude and lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the causes of stride and/or step time variability that are associated with a risk of falls.
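A return map plots each step time against the next, and the coefficient of determination of a linear fit to that map is one way to quantify regularity. A minimal sketch on synthetic step times; the data and the least-squares R^2 formulation are illustrative, not the study's:

```python
# Hedged sketch: return map of successive step times (T_n vs T_{n+1})
# and the R^2 of its least-squares line, on synthetic data.

def return_map_r2(step_times):
    x = step_times[:-1]          # T_n
    y = step_times[1:]           # T_{n+1}
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 1.0  # degenerate, perfectly regular sequence
    return sxy ** 2 / (sxx * syy)   # R^2 of the least-squares line

regular = [0.50, 0.52, 0.50, 0.52, 0.50, 0.52, 0.50, 0.52]
irregular = [0.50, 0.61, 0.47, 0.55, 0.52, 0.66, 0.45, 0.58]
r2_regular = return_map_r2(regular)
r2_irregular = return_map_r2(irregular)
```

A regular gait produces points that collapse onto a line (R^2 near 1), while irregular stepping scatters the map and lowers R^2.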

  8. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
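The simulated design can be sketched as a data generator. This is a hedged reconstruction, not the authors' code: group sizes, variances, and effect sizes are illustrative assumptions; only the three-group, two-period layout and the between-cluster-varying intervention effect follow the abstract.

```python
# Hedged sketch: generate one stepped wedge dataset with a cluster random
# intercept, a period effect, and an intervention effect that varies
# between clusters (the misspecification studied). All numeric values
# are illustrative assumptions.
import random

random.seed(1)

def simulate_swt(n_per_group=10, obs_per_cluster_period=20,
                 sigma_cluster=0.5, sigma_effect=0.5, sigma_e=1.0,
                 period_effect=0.3, mean_intervention=1.0):
    rows = []  # (cluster, period, treated, outcome)
    cluster = 0
    # switch_period: first period in which each cluster group is treated
    for switch_period in (0, 1, 1):          # one early group, two late
        for _ in range(n_per_group):
            u = random.gauss(0.0, sigma_cluster)                   # random intercept
            delta = random.gauss(mean_intervention, sigma_effect)  # cluster-specific effect
            for period in (0, 1):
                treated = int(period >= switch_period)
                for _ in range(obs_per_cluster_period):
                    y = (u + period_effect * period + delta * treated
                         + random.gauss(0.0, sigma_e))
                    rows.append((cluster, period, treated, y))
            cluster += 1
    return rows

data = simulate_swt()
```

Fitting the "standard model" and its random-effect extensions to such data (e.g. with a mixed-model package) is then what exposes the bias the abstract reports.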

  9. Color Shift Modeling of Light-Emitting Diode Lamps in Step-Loaded Stress Testing

    OpenAIRE

    Cai, Miao; Yang, Daoguo; Huang, J.; Zhang, Maofen; Chen, Xianping; Liang, Caihang; Koh, S.W.; Zhang, G.Q.

    2017-01-01

    The color coordinate shift of light-emitting diode (LED) lamps is investigated by running three stress-loaded testing methods, namely step-up stress accelerated degradation testing, step-down stress accelerated degradation testing, and constant stress accelerated degradation testing. A power model is proposed as the statistical model of the color shift (CS) process of LED products. Consequently, a CS mechanism constant is obtained for detecting the consistency of CS mechanisms among various s...

  10. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    Science.gov (United States)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.

  11. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  12. Development and evaluation of a real-time one step Reverse-Transcriptase PCR for quantitation of Chandipura Virus

    Directory of Open Access Journals (Sweden)

    Tandale Babasaheb V

    2008-12-01

    Full Text Available Abstract. Background: Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55-75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one step reverse transcriptase PCR (real-time one step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods: Primers and a probe for the P gene were designed and used to standardize the real-time one step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results: Real-time one step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10^2-10^10 copies (r^2 = 0.99), with a maximum coefficient of variation (CV = 5.91%) for IVT RNA. The newly developed real-time RT-PCR was on par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of both real-time one step RT-PCR and nested RT-PCR was found to be 1.2 × 10^0 PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10^2 PFU/ml). Vero and PS cell lines (1.2 × 10^3 PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy
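Quantitation in such an assay rests on standard-curve arithmetic: the threshold cycle (Ct) is linear in the log of the starting copy number, and unknowns are read off the fitted line. A minimal sketch with an idealized dilution series; the Ct values and the 3.32-cycle spacing are illustrative, not the paper's data:

```python
# Hedged sketch: the standard-curve arithmetic behind real-time PCR
# quantitation, fitted to an idealized dilution series of the in vitro
# transcribed RNA. All numeric values are illustrative.
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = intercept + slope * log10(copies)."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def quantify(ct, intercept, slope):
    """Invert the curve: copies = 10 ** ((Ct - intercept) / slope)."""
    return 10.0 ** ((ct - intercept) / slope)

# Ideal 10-fold dilution series over 1e2..1e10 copies, as in the assay's
# linear range; a slope of -3.32 corresponds to ~100% efficiency.
copies = [10 ** k for k in range(2, 11)]
cts = [40.0 - 3.32 * (k - 2) for k in range(2, 11)]
intercept, slope = fit_standard_curve(copies, cts)
efficiency = 10 ** (-1.0 / slope) - 1.0
```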

  13. Models for microtubule cargo transport coupling the Langevin equation to stochastic stepping motor dynamics: Caring about fluctuations.

    Science.gov (United States)

    Bouzat, Sebastián

    2016-01-01

    One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run time match previously specified functions of the external load, which are set on the basis of experimental results. We show that due to the influence of thermal fluctuations this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics that incorporates memory of the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force of the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
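    The class of model described above can be sketched in a few lines. Below is a minimal, illustrative one-dimensional simulation, not the authors' actual model: the cargo follows an overdamped Langevin equation integrated by Euler-Maruyama, while a single motor makes stochastic 8-nm steps at a rate that decreases linearly with backward load. The parameter values and the linear force-velocity relation are assumptions for illustration only:

```python
import math
import random

def simulate_cargo(T=0.05, dt=1e-5, k=0.1, gamma=9.4e-6, kBT=4.1,
                   step_nm=8.0, rate0=300.0, f_stall=6.0, seed=1):
    """Overdamped cargo (Langevin equation, Euler-Maruyama) coupled by a
    linear spring of stiffness k (pN/nm) to a motor that makes stochastic
    8-nm steps at a rate decreasing linearly with backward load.
    Units: pN, nm, s; all parameter values are illustrative."""
    rng = random.Random(seed)
    x_cargo = x_motor = 0.0
    kick = math.sqrt(2.0 * kBT * dt / gamma)   # thermal noise amplitude (nm)
    for _ in range(int(round(T / dt))):
        f = k * (x_motor - x_cargo)            # spring force on the cargo (pN)
        x_cargo += (f / gamma) * dt + kick * rng.gauss(0.0, 1.0)
        load = max(0.0, f)                     # backward load felt by the motor
        p_step = rate0 * max(0.0, 1.0 - load / f_stall) * dt
        if rng.random() < p_step:              # linear force-velocity relation
            x_motor += step_nm
    return x_cargo, x_motor
```

    The paper's point can be seen directly in such a sketch: because the spring force fluctuates thermally, the load-dependent stepping rate samples the noise, and naive rate definitions do not reproduce a prescribed force-velocity curve, which motivates the memory-based stepping rule the authors propose.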

  14. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries, suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions, which vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters, and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool for assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  16. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  17. Constructing an exposure chart: step by step (based on standard procedures)

    International Nuclear Information System (INIS)

    David, Jocelyn L; Cansino, Percedita T.; Taguibao, Angileo P.

    2000-01-01

    An exposure chart is very important in conducting radiographic inspection of materials. By using an accurate exposure chart, an inspector is able to avoid a trial-and-error approach to determining the correct exposure time for a specimen, thereby producing a radiograph with an acceptable density based on a standard. The chart gives the following information: x-ray machine model and brand, distance of the x-ray tube from the film, type and thickness of intensifying screens, film type, radiograph density, and film processing conditions. Methods of preparing an exposure chart are available in existing radiographic testing manuals. These methods are presented here as step-by-step procedures covering the actual laboratory set-up, data gathering, computations, and transformation of the derived data into a characteristic curve and an exposure chart.

  18. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    Science.gov (United States)

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Modeling multivariate time series on manifolds with skew radial basis functions.

    Science.gov (United States)

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters, in particular the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies on several problems, including modeling data on manifolds and the prediction of chaotic time series.
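    The iterate-and-refine structure of such RBF modeling can be illustrated with a stripped-down sketch. Plain Gaussians and a largest-residual placement heuristic stand in here for the paper's skew RBFs and statistical hypothesis test; the fixed width and center count are likewise illustrative simplifications of the automatic scale selection described above:

```python
import numpy as np

def fit_rbf(x, y, n_centers=15, width=None):
    """Greedy RBF approximation: at each step place a Gaussian at the point
    of largest residual, then refit all weights by linear least squares.
    (A simplification: the paper selects scales and placements via residual
    autocorrelation and a statistical test rather than a fixed width.)"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    width = width or (x.max() - x.min()) / n_centers
    centers, residual = [], y.copy()
    for _ in range(n_centers):
        centers.append(x[np.argmax(np.abs(residual))])
        c = np.array(centers)
        Phi = np.exp(-(((x[:, None] - c[None, :]) / width) ** 2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ w
    return c, w, width

def predict_rbf(xq, centers, w, width):
    Phi = np.exp(-(((np.asarray(xq, float)[:, None] - centers[None, :]) / width) ** 2))
    return Phi @ w
```

    Each pass adds one basis function where the model is worst and resolves all weights jointly, which is the same outer loop the abstract describes; the paper's contribution lies in how the placement and scale decisions are made statistically rather than heuristically.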

  20. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model has been developed based on a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space, which undergoes transitions from one state to another with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques involved at each step of building a Markov chain forecast model. The continuous-state Markov chain model is analytically described. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
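    The forecasting recipe can be sketched with a discrete-state simplification (the paper itself works with a continuous-state chain): discretise the value range into states, estimate the transition matrix by counting, and propagate the last observed state's distribution forward. States, smoothing and horizon below are illustrative choices:

```python
import numpy as np

def markov_forecast(series, n_states=4, horizon=1):
    """Multi-step-ahead forecast from a first-order, discrete-state Markov
    chain fitted to a scalar time series (e.g. NDVI). A discrete-state
    simplification of the continuous-state chain in the paper."""
    series = np.asarray(series, dtype=float)
    # discretise the value range into equal-width states
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)
    # estimate the transition matrix by counting observed transitions
    P = np.ones((n_states, n_states))          # Laplace smoothing
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    P /= P.sum(axis=1, keepdims=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # propagate the current state distribution `horizon` steps ahead
    p = np.zeros(n_states)
    p[states[-1]] = 1.0
    for _ in range(horizon):
        p = p @ P
    return float(p @ centers)                  # expected-value forecast
```

    Raising the chain order (conditioning on the last few states instead of one) and replacing the histogram states with a continuous-state kernel are the directions the paper pursues.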

  1. An adaptive spatio-temporal smoothing model for estimating trends and step changes in disease risk

    OpenAIRE

    Rushworth, Alastair; Lee, Duncan; Sarran, Christophe

    2014-01-01

    Statistical models used to estimate the spatio-temporal pattern in disease risk from areal unit data represent the risk surface for each time period with known covariates and a set of spatially smooth random effects. The latter act as a proxy for unmeasured spatial confounding, whose spatial structure is often characterised by a spatially smooth evolution between some pairs of adjacent areal units while other pairs exhibit large step changes. This spatial heterogeneity is not c...

  2. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  3. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal-decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate how much these mainstream signal-decomposing algorithms improve the Extreme Learning Machines in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, Wavelet Packet Decomposition and Fast Ensemble Empirical Mode Decomposition outperform Wavelet Decomposition and Empirical Mode Decomposition, respectively, at all step lengths; and (4) the proposed algorithms are effective for accurate wind speed prediction.
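    The core forecasting component, an Extreme Learning Machine regressing the next value on lagged values, can be sketched as follows. The decomposition stage (wavelet/EMD), which the paper applies to each sub-series before forecasting, is omitted here, and the lag, hidden-layer size and ridge constant are illustrative:

```python
import numpy as np

def elm_forecast(series, lag=8, hidden=60, ridge=1e-3, seed=0):
    """One-step-ahead forecast from an Extreme Learning Machine: random,
    fixed input weights; only the output layer is solved, via
    ridge-regularised least squares on lag vectors of the series."""
    rng = np.random.default_rng(seed)
    s = np.asarray(series, float)
    X = np.array([s[i:i + lag] for i in range(len(s) - lag)])
    y = s[lag:]
    W = rng.standard_normal((lag, hidden))      # random input weights (fixed)
    b = rng.standard_normal(hidden)             # random biases (fixed)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ y)
    h_last = np.tanh(s[-lag:] @ W + b)          # features of the latest lag vector
    return float(h_last @ beta)
```

    In the hybrid architecture of the paper, one such model is trained per decomposed sub-series and the sub-forecasts are summed; multi-step forecasts are obtained by feeding predictions back as inputs.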

  4. Compact Two-step Laser Time-of-Flight Mass Spectrometer for in Situ Analyses of Aromatic Organics on Planetary Missions

    Science.gov (United States)

    Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa

    2012-01-01

    RATIONALE A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional one-step laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) post-ionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon (PAH), representing a class of compounds important to the fields of Earth and planetary science. RESULTS L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. Mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.

  5. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by generalized hybrid Monte Carlo leads to improved stability of MTS and allows for larger step sizes in the simulation of complex systems.
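    The force-splitting idea underlying all of these MTS variants can be made concrete with the generic impulse (r-RESPA-style) integrator: slow forces are applied with the outer time step, fast forces are sub-cycled with velocity Verlet. This is the textbook scheme, not the paper's MTS-GSHMC algorithm, and the harmonic demo forces are illustrative:

```python
def respa_step(x, v, m, f_slow, f_fast, dt, n_inner):
    """One impulse-MTS (r-RESPA-style) step: the slow force kicks with the
    outer step dt, while the fast force is integrated with velocity Verlet
    at the inner step dt/n_inner."""
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner velocity Verlet (fast force)
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    return x, v

# demo: harmonic oscillator split into a stiff 'fast' and a weak 'slow' spring
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, m=1.0,
                      f_slow=lambda q: -1.0 * q,    # weak spring (slow force)
                      f_fast=lambda q: -100.0 * q,  # stiff spring (fast force)
                      dt=0.01, n_inner=10)
```

    Because the composite map is symplectic, the demo conserves energy over long runs despite the slow force being evaluated ten times less often; in MD the payoff is evaluating expensive long-range forces at the outer step only, and the shadow-Hamiltonian filtering of the paper further improves the HMC acceptance rate of such trajectories.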

  6. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    Science.gov (United States)

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.
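    The partitioned survival structure contrasted above with state-transition models can be made concrete: state occupancies are read directly off two survival curves rather than built from transition probabilities. A minimal three-state sketch with hypothetical exponential curves (the hazards are illustrative only):

```python
import math

def partitioned_survival(t, pfs, os_curve):
    """Three-state partitioned survival model: occupancy at time t is read
    directly off the two survival curves, not derived from transition
    probabilities as in a Markov state-transition model."""
    alive = os_curve(t)                 # overall survival OS(t)
    pf = min(pfs(t), alive)             # progression-free survival PFS(t)
    return {"progression_free": pf,
            "progressed": alive - pf,   # alive but post-progression
            "dead": 1.0 - alive}

# hypothetical exponential survival curves (hazards are illustrative)
pfs = lambda t: math.exp(-0.10 * t)
os_curve = lambda t: math.exp(-0.04 * t)
occupancy = partitioned_survival(12.0, pfs, os_curve)
```

    The clamp of PFS to OS illustrates one of the practical pitfalls the authors discuss: the two independently extrapolated curves can cross, producing a negative "progressed" occupancy unless the analyst intervenes.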

  7. Sensor response monitoring in pressurized water reactors using time series modeling

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Kerlin, T.W.

    1978-01-01

    Random data analysis in nuclear power reactors for purposes of process surveillance, pattern recognition and monitoring of temperature, pressure, flow and neutron sensors has gained increasing attention in view of its potential for helping to ensure safe plant operation. In this study, the application of autoregressive moving-average (ARMA) time series modeling for monitoring temperature sensor response characteristics is presented. The ARMA model is used to estimate the step and ramp responses of the sensors and the related time constant and ramp delay time. The ARMA parameters are estimated by a two-stage algorithm in the spectral domain. Results of sensor testing for an operating pressurized water reactor are presented. 16 refs
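    The idea of extracting a sensor time constant from fitted time series model parameters can be shown in its simplest form: an AR(1) fit to first-order sensor data, whose pole maps directly to a time constant. The paper's two-stage spectral-domain ARMA estimation is more general; the synthetic data and parameter values below are illustrative only:

```python
import math
import random

def estimate_time_constant(y, dt):
    """Least-squares fit of an AR(1) model y[n] = phi*y[n-1] + e[n];
    the pole phi maps to a first-order time constant tau = -dt/ln(phi)."""
    num = sum(a * b for a, b in zip(y[1:], y[:-1]))
    den = sum(a * a for a in y[:-1])
    phi = num / den
    return -dt / math.log(phi)

# synthetic first-order 'sensor' noise data: tau = 2 s, sampled every 0.1 s
random.seed(3)
dt, tau = 0.1, 2.0
phi_true = math.exp(-dt / tau)
y, val = [], 0.0
for _ in range(20000):
    val = phi_true * val + random.gauss(0.0, 1.0)
    y.append(val)
print(estimate_time_constant(y, dt))   # close to tau = 2.0 for this run
```

    With a full ARMA fit, the same mapping from estimated poles to physical response parameters yields the step response, ramp response and ramp delay time quoted in the abstract.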

  8. Modelling and Fixed Step Simulation of a Turbo Charged Diesel Engine

    OpenAIRE

    Ritzén, Jesper

    2003-01-01

    Having an engine model that is accurate but not too complicated is desirable when working with on-board diagnosis or engine control. In this thesis a four-state mean value model is introduced. To make the model usable in an on-line automotive application, it is discrete and simulated with a fixed-step-size solver. Modelling is done with simplicity as the main objective. Some simple static models are also presented. To validate the model, measurements were carried out in a Scania R124LB truck with a 12 lit...

  9. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Full Text Available Scientific customer value segmentation (CVS) is the basis of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include many missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM model). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple-classifier ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset “German” from UCI and the real customer churn prediction dataset “China churn” show that ODCEM outperforms four commonly used “two-step” models and the ensemble-based model LMF, and can provide better decision support for market managers.

  10. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010-one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom

  11. The enhancement of time-stepping procedures in SYVAC A/C

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1986-01-01

    This report summarises the work carried out on SYVAC A/C between February and May 1985 aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: i) long vault release, short geosphere response; ii) short vault release, long geosphere response; iii) short vault release, short geosphere response. The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)

  12. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials: a control trial of undisturbed walking, and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition; step width and step width variability increased 19% and five percent, respectively. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward understanding the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
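    The spectral characterization used above, fractal scaling estimated from the power spectrum of the step width series, can be sketched as a log-log periodogram fit. This is a minimal, assumption-laden version (plain periodogram, ordinary least squares over all positive frequencies); the study's exact estimator is not specified in the abstract:

```python
import numpy as np

def spectral_exponent(series):
    """Estimate the exponent beta of a power-law spectrum S(f) ~ f**-beta
    via an ordinary least-squares fit to the log-log periodogram.
    beta near 0 indicates white (uncorrelated) noise; beta near 2, a
    random walk; intermediate values, correlated 'fractal' noise."""
    s = np.asarray(series, float)
    s = s - s.mean()
    psd = np.abs(np.fft.rfft(s)) ** 2          # periodogram
    f = np.fft.rfftfreq(s.size)                # frequencies in cycles/sample
    keep = f > 0                               # drop the DC bin
    slope, _ = np.polyfit(np.log(f[keep]), np.log(psd[keep] + 1e-300), 1)
    return -slope
```

    A shift of the estimated exponent toward zero under dual-tasking is exactly the "whitening" of the step width series that the study's hypothesis describes.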

  13. Arnold tongues and the Devil's Staircase in a discrete-time Hindmarsh–Rose neuron model

    International Nuclear Information System (INIS)

    Felicio, Carolini C.; Rech, Paulo C.

    2015-01-01

    We investigate a three-dimensional discrete-time dynamical system, described by a three-dimensional map derived from a continuous-time Hindmarsh–Rose neuron model by the forward Euler method. For a fixed integration step size, we report a two-dimensional parameter-space for this system, where periodic structures, the so-called Arnold tongues, can be seen with periods organized in a Farey tree sequence. We also report possible modifications in this parameter-space, as a function of the integration step size. - Highlights: • We investigate the parameter-space of a particular 3D map. • Periodic structures, namely Arnold tongues, can be seen there. • They are organized in a Farey tree sequence. • The map was derived from a continuous-time Hindmarsh–Rose neuron model. • The forward Euler method was used for such purpose.
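    The map construction described above, applying the forward Euler method with a fixed integration step to the continuous-time Hindmarsh–Rose equations, can be sketched directly. The parameter values below are the commonly used ones and are illustrative; the paper scans a two-dimensional parameter space and varies the step size:

```python
def hindmarsh_rose_map(n_steps, dt=0.01, I=3.25, a=1.0, b=3.0,
                       c=1.0, d=5.0, r=0.006, s=4.0, x_rest=-1.6):
    """Discrete-time Hindmarsh-Rose neuron obtained by applying the forward
    Euler method with fixed step dt to the continuous 3D model; returns the
    membrane-potential trace x. Parameter values are commonly used ones."""
    x, y, z = -1.0, 0.0, 0.0
    trace = []
    for _ in range(n_steps):
        dx = y - a * x**3 + b * x**2 - z + I   # fast membrane variable
        dy = c - d * x**2 - y                  # fast recovery variable
        dz = r * (s * (x - x_rest) - z)        # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace.append(x)
    return trace
```

    Because the map is the Euler discretization rather than the flow itself, its dynamics depend on dt, which is precisely why the paper reports how the Arnold tongues in parameter space deform as the integration step size changes.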

  14. Time-resolved measurements of laser-induced diffusion of CO molecules on stepped Pt(111)-surfaces; Zeitaufgeloeste Untersuchung der laser-induzierten Diffusion von CO-Molekuelen auf gestuften Pt(111)-Oberflaechen

    Energy Technology Data Exchange (ETDEWEB)

    Lawrenz, M.

    2007-10-30

    In the present work the dynamics of CO molecules on a stepped Pt(111) surface induced by fs laser pulses at low temperatures was studied using laser spectroscopy. In the first part of the work, laser-induced diffusion in the CO/Pt(111) system was demonstrated and successfully modelled for step diffusion. First, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled. A friction coefficient that depends on the electron temperature yields a consistent model, and the same set of parameters was used in parallel to understand both the fluence dependence and the time-resolved measurements. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces and diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. The two-pulse correlation measurements performed in addition also indicate an electronically induced process. At a substrate temperature of 40 K the cross-correlation, from which an energy transfer time of 1.8 ps was extracted, likewise suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed at different substrate temperatures. (orig.)

  15. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    Science.gov (United States)

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step, were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step differed between 'fast' and 'slow' sub-groups (η² ≥ 0.22). Nevertheless, overall, both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective conditions. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  16. A one-step, real-time PCR assay for rapid detection of rhinovirus.

    Science.gov (United States)

    Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M

    2010-01-01

    One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay using 38 of 40 rhinovirus reference strains representing different serotypes; the assay could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.

  17. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and the same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have successfully been analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions as compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the aforementioned energy region. (author). 10 refs., 2 figs

  18. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    International Nuclear Information System (INIS)

    Finn, John M.

    2015-01-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
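    The implicit midpoint (IM) scheme mentioned above can be sketched for field-line integration dx/ds = B(x): the update x_{n+1} = x_n + h·B((x_n + x_{n+1})/2) is solved by fixed-point iteration. IM preserves quadratic invariants exactly, one reason it does well at preserving KAM tori. The rotational test field, step size and tolerances below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    # Implicit midpoint step for dx/ds = B(x), solved by fixed-point iteration.
    def implicit_midpoint_step(B, x, h, tol=1e-12, max_iter=100):
        x_new = x + h * B(x)                      # explicit Euler predictor
        for _ in range(max_iter):
            x_mid = 0.5 * (x + x_new)
            x_next = x + h * B(x_mid)
            if np.linalg.norm(x_next - x_new) < tol:
                return x_next
            x_new = x_next
        return x_new

    # Divergence-free test field: rigid rotation about the z-axis plus a drift.
    B = lambda x: np.array([-x[1], x[0], 1.0])

    x = np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        x = implicit_midpoint_step(B, x, h=0.1)
    # The cylindrical radius is a quadratic invariant of this field, and IM
    # preserves it (up to the fixed-point tolerance) over many steps.
    print(np.hypot(x[0], x[1]))
    ```

    An explicit scheme such as forward Euler would spiral outward on the same field, which is the kind of secular drift that destroys invariant tori.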

  19. Nuclear fusion during yeast mating occurs by a three-step pathway.

    Science.gov (United States)

    Melloy, Patricia; Shen, Shu; White, Erin; McIntosh, J Richard; Rose, Mark D

    2007-11-19

    In Saccharomyces cerevisiae, mating culminates in nuclear fusion to produce a diploid zygote. Two models for nuclear fusion have been proposed: a one-step model in which the outer and inner nuclear membranes and the spindle pole bodies (SPBs) fuse simultaneously and a three-step model in which the three events occur separately. To differentiate between these models, we used electron tomography and time-lapse light microscopy of early stage wild-type zygotes. We observe two distinct SPBs in approximately 80% of zygotes that contain fused nuclei, whereas we only see fused or partially fused SPBs in zygotes in which the site of nuclear envelope (NE) fusion is already dilated. This demonstrates that SPB fusion occurs after NE fusion. Time-lapse microscopy of zygotes containing fluorescent protein tags that localize to either the NE lumen or the nucleoplasm demonstrates that outer membrane fusion precedes inner membrane fusion. We conclude that nuclear fusion occurs by a three-step pathway.

  20. A space-time hybrid hourly rainfall model for derived flood frequency analysis

    Directory of Open Access Journals (Sweden)

    U. Haberlandt

    2008-12-01

    For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series.

    First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step a multi-site resampling procedure is applied on the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study synthetic precipitation is generated for some locations with short observation records in two mesoscale catchments of the Bode river basin located in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as in
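    The first step above, a single-site alternating renewal model, can be sketched as alternating dry and wet spells with random durations and a mean intensity per wet spell. The distribution choices (exponential durations, gamma intensities) and parameters below are illustrative assumptions; the paper's seasonal distributions, copula-based duration-intensity dependence, disaggregation profile and multi-site resampling step are omitted.

    ```python
    import numpy as np

    # Minimal single-site alternating renewal rainfall generator (illustrative
    # distributions and parameters only; see lead-in for what is omitted).
    def simulate_hourly_rain(n_hours, rng,
                             mean_dry_h=30.0, mean_wet_h=5.0, intensity_shape=0.8):
        rain = np.zeros(n_hours)
        t, wet = 0, False
        while t < n_hours:
            if wet:
                dur = max(1, int(rng.exponential(mean_wet_h)))
                rain[t:t + dur] = rng.gamma(intensity_shape, 1.0)  # mm/h, flat profile
            else:
                dur = max(1, int(rng.exponential(mean_dry_h)))
            t += dur
            wet = not wet
        return rain

    rng = np.random.default_rng(42)
    series = simulate_hourly_rain(24 * 365, rng)
    print(f"wet fraction: {np.mean(series > 0):.2f}")
    ```

    In the paper the synthetic point series are then resampled across sites with simulated annealing to restore spatial dependence before driving the rainfall-runoff model.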

  1. Real time thermal hydraulic model for high temperature gas-cooled reactor core

    International Nuclear Information System (INIS)

    Sui Zhe; Sun Jun; Ma Yuanle; Zhang Ruipeng

    2013-01-01

    A real-time thermal hydraulic model of the reactor core was described and integrated into the simulation system for the high temperature gas-cooled pebble bed reactor nuclear power plant, which was developed in the vPower platform, a new simulation environment for nuclear and fossil power plants. In the thermal hydraulic model, the helium flow paths were established by the flow network tools in order to obtain the flow rates and pressure distributions. Meanwhile, the heat structures, representing all the solid heat transfer elements in the pebble bed, graphite reflectors and carbon bricks, were connected by the heat transfer network in order to solve the temperature distributions in the reactor core. The flow network and heat transfer network were coupled and calculated in real time. Two steady states (100% and 50% full power) and two transients (inlet temperature step and flow step) were tested; the quantitative comparison of the steady-state results with design data and the qualitative analysis of the transients showed the good applicability of the present thermal hydraulic model. (authors)

  2. A step-defined sedentary lifestyle index: <5000 steps/day.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of a reoccurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. We believe that a <5000 steps/day step-defined sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  3. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe

    2016-12-27

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
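    The adaptive time-stepping idea described above, using a first-order backward approximation as an error estimator that needs no extra solves, can be sketched on a scalar model problem. The ODE u' = -u, the trapezoidal rule as the second-order scheme, and the accept/grow/shrink controller constants below are simplified illustrative assumptions, not the paper's phase-field discretization.

    ```python
    import math

    # Adaptive time stepping with a first-order backward-Euler error estimator,
    # shown on u' = -u (exact solution exp(-t)). Both implicit updates are
    # solved in closed form for this linear problem.
    def adaptive_integrate(u0, t_end, dt=0.1, tol=1e-5):
        u, t = u0, 0.0
        while t < t_end:
            dt = min(dt, t_end - t)
            u_trap = u * (1 - dt / 2) / (1 + dt / 2)  # second-order trapezoidal step
            u_be = u / (1 + dt)                       # first-order backward Euler step
            err = abs(u_trap - u_be)                  # local error estimate, no extra solve
            if err <= tol:
                u, t = u_trap, t + dt                 # accept the higher-order solution
                dt *= 1.2                             # grow the step (simple controller)
            else:
                dt *= 0.5                             # reject and retry with a smaller step
        return u

    u = adaptive_integrate(1.0, t_end=5.0)
    print(u, math.exp(-5.0))
    ```

    As the solution decays, the estimator lets the controller take progressively larger steps, which is the behaviour that makes adaptivity pay off in long phase-field runs toward steady state.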

  4. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe; Collier, N.; Dalcin, Lisandro; Brown, D.L.; Calo, V.M.

    2016-01-01

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.

  5. A stepped leader model for lightning including charge distribution in branched channels

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)

    2014-09-14

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  6. A stepped leader model for lightning including charge distribution in branched channels

    International Nuclear Information System (INIS)

    Shi, Wei; Zhang, Li; Li, Qingmin

    2014-01-01

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.

  7. The Value of Step-by-Step Risk Assessment for Unmanned Aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    The new European legislation expected in 2018 or 2019 will introduce a step-by-step process for conducting risk assessments for unmanned aircraft flight operations. This is a relatively simple approach to a very complex challenge. This work compares the step-by-step process to high-fidelity risk modeling, and shows that, at least for a series of example flight missions, there is reasonable agreement between the two very different methods.

  8. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    Science.gov (United States)

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30-m sprints in female soccer players. Seventeen subjects performed seven 30-m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased over the repeated sprint ability test. The main changes in kinematics during the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. Within each sprint, step velocity increased with almost every step until the 14th step, which occurred around 22 m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by increased step length and decreased contact times. It was concluded that the fatigue induced by repeated 30-m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and infrared mat over 30 m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and to target these changes in kinematics to enhance repeated sprint performance.

  9. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve traffic safety for this group is implementation of a step-by-step driving licence, and a number of countries have introduced such schemes. This entry presents a review of safety effects from step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver, and Step 3) driving without a companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects of driving with an experienced driver vary.

  10. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to

  11. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    Science.gov (United States)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians from different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed using a step measurement method based on the calculation of the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the pedestrian dynamics of movement. Meanwhile, it offers experimental data to develop microscopic models of pedestrian movement that consider stepping behavior.
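    The geometric ingredient of the curvature-based step measurement mentioned above can be sketched as discrete curvature from consecutive point triples: each triple defines a circumscribed circle whose inverse radius is the local curvature, and lateral sway during a step makes this quantity oscillate. This illustrates only the curvature computation, not the authors' full extraction pipeline.

    ```python
    import numpy as np

    # Discrete curvature of a 2D trajectory: for each triple (p0, p1, p2),
    # curvature = 1 / circumradius = 4 * triangle_area / (a * b * c).
    def curvature(points):
        p0, p1, p2 = points[:-2], points[1:-1], points[2:]
        a = np.linalg.norm(p1 - p0, axis=1)
        b = np.linalg.norm(p2 - p1, axis=1)
        c = np.linalg.norm(p2 - p0, axis=1)
        # |cross| is twice the triangle area spanned by the triple.
        cross = (p1[:, 0] - p0[:, 0]) * (p2[:, 1] - p0[:, 1]) \
              - (p1[:, 1] - p0[:, 1]) * (p2[:, 0] - p0[:, 0])
        return 2.0 * np.abs(cross) / (a * b * c)

    # Sanity check: points on a circle of radius 2 have curvature 0.5 everywhere.
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)])
    kappa = curvature(circle)
    print(kappa.min(), kappa.max())  # both ~0.5
    ```

    On a real walking trajectory, the peaks of this curvature signal would mark the alternating left/right excursions from which step length and width can be measured.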

  12. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
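    The transition matrix T for a given partition and lag τ can be illustrated by the simple counting (maximum-likelihood) estimate on a trajectory already discretized into mesostates. The paper's Bayesian method goes further, inferring the partition P and a posterior over (P, T); that part is not reproduced in this sketch, and the toy two-state process below is an invented example.

    ```python
    import numpy as np

    # Counting estimate of a Markov transition matrix at lag tau from a
    # discretized trajectory (maximum-likelihood row normalisation).
    def transition_matrix(states, n_states, tau=1):
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-tau], states[tau:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        rows[rows == 0] = 1.0              # avoid division by zero for unvisited states
        return counts / rows

    # Toy trajectory: a two-state process that mostly stays put.
    rng = np.random.default_rng(0)
    traj = [0]
    for _ in range(10000):
        stay = 0.9 if traj[-1] == 0 else 0.8
        traj.append(traj[-1] if rng.random() < stay else 1 - traj[-1])

    T = transition_matrix(np.array(traj), n_states=2, tau=1)
    print(T)
    ```

    Because both diagonal entries exceed 0.5 here, this (P, T) would qualify as a "consistent mesoscopic Markov model" in the sense defined in the record.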

  13. The NIST Step Class Library (Step Into the Future)

    Science.gov (United States)

    1990-09-01

    Figure 6. Excerpt from a STEP exchange file based on the Geometry model (The NIST STEP Class Library, p. 13). References include Scheifler, R., Gettys, J., and Newman, P., X Window System: C Library and Protocol Reference, Digital Press, Bedford, Mass., 1988; and Schenck, D., 1990.

  14. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
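    A member of the splitting family discussed above can be sketched as the common "BAOAB"-style composition: a velocity Verlet core (B and A substeps) wrapped around an exactly solved Ornstein-Uhlenbeck substep (O) for friction and noise. This is a generic textbook splitting for illustration, not the specific time-step-rescaled integrator the paper identifies, and the harmonic test problem is an assumption.

    ```python
    import numpy as np

    # BAOAB-style Langevin integrator for a 1D particle.
    def baoab(x, v, force, n_steps, dt=0.1, gamma=1.0, kT=1.0, m=1.0, seed=0):
        rng = np.random.default_rng(seed)
        c1 = np.exp(-gamma * dt)               # OU decay over a full step
        c2 = np.sqrt((1 - c1**2) * kT / m)     # matching noise amplitude
        xs = np.empty(n_steps)
        for i in range(n_steps):
            v += 0.5 * dt * force(x) / m       # B: half kick
            x += 0.5 * dt * v                  # A: half drift
            v = c1 * v + c2 * rng.standard_normal()  # O: exact friction + noise
            x += 0.5 * dt * v                  # A: half drift
            v += 0.5 * dt * force(x) / m       # B: half kick
            xs[i] = x
        return xs

    # Harmonic oscillator U(x) = x^2 / 2: the equilibrium position variance
    # should approach kT / k = 1 (up to O(dt^2) discretization error).
    xs = baoab(0.0, 0.0, force=lambda x: -x, n_steps=100_000)
    print(xs[10_000:].var())
    ```

    Checking such equilibrium averages against their analytic values is one of the standard desiderata by which the paper compares members of this family.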

  15. Takeover times for a simple model of network infection.

    Science.gov (United States)

    Ottino-Löffler, Bertrand; Scott, Jacob G; Strogatz, Steven H

    2017-07-01

    We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N≫1, the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d-dimensional lattices with d≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.
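    The model described above is simple to simulate directly: at each time step pick a random node and a random neighbour, and pass the infection only from infected to susceptible. The sketch below runs it on the complete graph, where the process slows at both ends (few infected sources early, few susceptible targets late), the coupon-collector behaviour invoked in the text; graph size and run count are arbitrary choices.

    ```python
    import random

    # Takeover time of the infection model on the complete graph K_n.
    def takeover_time_complete_graph(n, rng):
        infected = [False] * n
        infected[0] = True
        m, t = 1, 0
        while m < n:
            t += 1
            node = rng.randrange(n)
            nbr = rng.randrange(n - 1)
            if nbr >= node:                    # choose a neighbour distinct from node
                nbr += 1
            if infected[node] and not infected[nbr]:
                infected[nbr] = True
                m += 1
        return t

    rng = random.Random(1)
    times = [takeover_time_complete_graph(50, rng) for _ in range(200)]
    print(sum(times) / len(times))  # mean T grows like 2 * N * ln(N) for large N
    ```

    Summing the expected waiting times between infections, N(N-1)/(m(N-m)) for m infected nodes, reproduces the 2N ln N scaling and shows the dominance of the first and last few infections.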

  16. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to the group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.

  17. Time to pause before the next step

    International Nuclear Information System (INIS)

    Siemon, R.E.

    1998-01-01

    Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here

  18. PID controller auto-tuning based on process step response and damping optimum criterion.

    Science.gov (United States)

    Pavković, Danijel; Polak, Siniša; Zorc, Davor

    2014-01-01

    This paper presents a novel method of PID controller tuning suitable for higher-order aperiodic processes and aimed at step response-based auto-tuning applications. The PID controller tuning is based on the identification of so-called n-th order lag (PTn) process model and application of damping optimum criterion, thus facilitating straightforward algebraic rules for the adjustment of both the closed-loop response speed and damping. The PTn model identification is based on the process step response, wherein the PTn model parameters are evaluated in a novel manner from the process step response equivalent dead-time and lag time constant. The effectiveness of the proposed PTn model parameter estimation procedure and the related damping optimum-based PID controller auto-tuning have been verified by means of extensive computer simulations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
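The PTn identification above relies on extracting an equivalent dead time and lag time constant from the process step response. A minimal sketch of that extraction (the classical tangent construction; the test process and all numerical values here are illustrative, not the paper's fitted values or its exact identification rules):

```python
import numpy as np

def characterize_step_response(t, y, y_final):
    """Estimate the equivalent dead time (t_u) and lag time constant (t_g)
    of an aperiodic process from its unit step response using the classical
    tangent construction: take the tangent at the point of maximum slope
    and intersect it with y = 0 and y = y_final."""
    dydt = np.gradient(y, t)
    i = int(np.argmax(dydt))            # inflection point (maximum slope)
    slope = dydt[i]
    t_u = t[i] - y[i] / slope           # tangent crosses y = 0: dead time
    t_g = y_final / slope               # tangent rise time: lag constant
    return t_u, t_g

# Hypothetical second-order lag process y(t) = 1 - (1 + t/T) * exp(-t/T).
T = 1.0
t = np.linspace(0.0, 10.0, 2001)
y = 1.0 - (1.0 + t / T) * np.exp(-t / T)
t_u, t_g = characterize_step_response(t, y, y_final=1.0)
```

From these two step-response features a PTn model order and time constant can then be assigned, which is the starting point for the damping-optimum gain computation described in the abstract.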

  19. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Application of a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated form and is error-prone when describing geometric models. A conversion algorithm that can solve this problem by converting a general geometric model to an MCNP model during MCNP-aided modeling is therefore highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP and INP file formats were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of geometry and topology information from the STEP file as well as improved production efficiency of the output INP file. This research is promising, and serves as a valuable reference for researchers involved with MCNP-related work

  20. Stepping out: dare to step forward, step back, or just stand still and breathe.

    Science.gov (United States)

    Waisman, Mary Sue

    2012-01-01

    It is important to step out and make a difference. We have one of the most unique and diverse professions that allows for diversity in thought and practice, permitting each of us to grow in our unique niches and make significant contributions. I was frightened to 'step out' to go to culinary school at the age of 46, but it changed forever the way I look at my profession and I have since experienced the most enjoyable and innovative career. There are also times when it is important to 'step back' to relish the roots of our profession; to help bring food back into nutrition; to translate all of our wonderful science into a language of food that Canadians understand. We all need to take time to 'just stand still and breathe': to celebrate our accomplishments, reflect on our actions, ensure we are heading toward our vision, keep the profession vibrant and relevant, and cherish one another.

  1. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.

    Science.gov (United States)

    Rogers, Mark W; Mille, Marie-Laure

    2016-08-15

    Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  2. Step training improves reaction time, gait and balance and reduces falls in older people: a systematic review and meta-analysis.

    Science.gov (United States)

    Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R

    2017-04-01

    To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles were searched from inception to March 2015. Randomised controlled trials (RCTs) or clinical controlled trials (CCTs) of volitional and reactive stepping interventions that included older people (minimum age 60) and provided data on falls or fall risk factors were eligible. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, and timed up and go performance. Overall, volitional and reactive stepping interventions reduced falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery, but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  3. A simple one-step chemistry model for partially premixed hydrocarbon combustion

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2006-10-15

    This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature T_a, and the preexponential factor B. It is seen that changes in q with equivalence ratio f need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(f) for f>1 for all hydrocarbons. The model also employs a variable activation temperature T_a(f) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately across the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
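As a rough illustration of the ingredients named above, a one-step irreversible Arrhenius rate with unity reaction order, together with a heat of reaction that decreases linearly with equivalence ratio on the rich side, can be sketched as follows. All numerical values (q0, the rich-side slope, B, T_a) are placeholders, not the values fitted in the paper:

```python
import numpy as np

def heat_release(phi, q0=1.0, slope=0.21):
    """Heat of reaction per unit mass of fuel: constant up to phi = 1,
    then decreasing linearly on the rich side to mimic the reduced heat
    release caused by partial fuel oxidation."""
    return q0 if phi <= 1.0 else q0 - slope * (phi - 1.0)

def reaction_rate(rho, Y_fuel, T, B=1.0e9, T_a=15000.0):
    """One-step irreversible Arrhenius rate with unity reaction order
    in the fuel mass fraction: omega = B * rho * Y_F * exp(-T_a / T)."""
    return B * rho * Y_fuel * np.exp(-T_a / T)
```

The strong sensitivity of the rate to temperature through exp(-T_a/T) is what makes the single adjustable activation temperature T_a(f) effective for matching propagation speeds across mixture strengths.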

  4. The importance of age composition of 12-step meetings as a moderating factor in the relation between young adults' 12-step participation and abstinence.

    Science.gov (United States)

    Labbe, Allison K; Greene, Claire; Bergman, Brandon G; Hoeppner, Bettina; Kelly, John F

    2013-12-01

    Participation in 12-step mutual help organizations (MHO) is a common continuing care recommendation for adults; however, little is known about the effects of MHO participation among young adults (i.e., ages 18-25 years) for whom the typically older age composition at meetings may serve as a barrier to engagement and benefits. This study examined whether the age composition of 12-step meetings moderated the recovery benefits derived from attending MHOs. Young adults (n=302; 18-24 years; 26% female; 94% White) enrolled in a naturalistic study of residential treatment effectiveness were assessed at intake, and 3, 6, and 12 months later on 12-step attendance, age composition of attended 12-step groups, and treatment outcome (Percent Days Abstinent [PDA]). Hierarchical linear models (HLM) tested the moderating effect of age composition on PDA concurrently and in lagged models controlling for confounds. A significant three-way interaction between attendance, age composition, and time was detected in the concurrent (p=0.002), but not lagged, model (b=0.38, p=0.46). Specifically, a similar age composition was helpful early post-treatment among low 12-step attendees, but became detrimental over time. Treatment and other referral agencies might enhance the likelihood of successful remission and recovery among young adults by locating and initially linking such individuals to age appropriate groups. Once engaged, however, it may be prudent to encourage gradual integration into the broader mixed-age range of 12-step meetings, wherein it is possible that older members may provide the depth and length of sober experience needed to carry young adults forward into long-term recovery. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  6. Bereday and Hilker: Origins of the "Four Steps of Comparison" Model

    Science.gov (United States)

    Adick, Christel

    2018-01-01

    The article draws attention to the forgotten ancestry of the "four steps of comparison" model (description--interpretation--juxtaposition--comparison). Comparativists largely attribute this to George Z. F. Bereday [1964. "Comparative Method in Education." New York: Holt, Rinehart and Winston], but among German scholars, it is…

  7. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  8. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  9. Time-series models on somatic cell score improve detection of mastitis

    DEFF Research Database (Denmark)

    Norberg, E; Korsgaard, I R; Sloth, K H M N

    2008-01-01

    In-line detection of mastitis using frequent milk sampling was studied in 241 cows in a Danish research herd. Somatic cell scores obtained on a daily basis were analyzed using a mixture of four time-series models. Probabilities were assigned to each model for the observations to belong to a normal "steady-state" development, a change in "level", a change of "slope", or an "outlier". Mastitis was indicated from the sum of probabilities for the "level" and "slope" models. The time-series models were based on the Kalman filter. Reference data were obtained from veterinary assessment of health status combined with bacteriological findings. At a sensitivity of 90% the corresponding specificity was 68%, which increased to 83% using a one-step back smoothing. It is concluded that mixture models based on Kalman filters are efficient in handling in-line sensor data for detection of mastitis and may be useful for similar...
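A heavily simplified stand-in for the machinery described above: a scalar local-level Kalman filter that flags a somatic cell score observation when its standardized innovation is large. The four-model mixture and the one-step-back smoothing of the paper are omitted, and the noise variances and threshold are invented for illustration:

```python
import numpy as np

def kalman_monitor(obs, q=0.01, r=0.5, threshold=3.0):
    """Scalar local-level Kalman filter that flags observations whose
    standardized innovation exceeds a threshold. q is the (assumed)
    process-noise variance, r the observation-noise variance."""
    m, p = obs[0], 1.0                  # state mean and variance
    flags = []
    for z in obs[1:]:
        p_pred = p + q                  # time update (random-walk level)
        s = p_pred + r                  # innovation variance
        nu = z - m                      # innovation
        flags.append(abs(nu) / np.sqrt(s) > threshold)
        k = p_pred / s                  # Kalman gain
        m += k * nu                     # measurement update
        p = (1.0 - k) * p_pred
    return flags

# Stable somatic cell scores followed by an abrupt elevation.
scc = [3.0] * 20 + [8.0] * 5
flags = kalman_monitor(scc)
```

On this synthetic series the filter stays quiet over the stable baseline and fires at the jump, which is the qualitative behavior the "level"-change model of the mixture is meant to capture.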

  10. Using Aspen plus in thermodynamics instruction a step-by-step guide

    CERN Document Server

    Sandler, Stanley I

    2015-01-01

    A step-by-step guide for students (and faculty) on the use of Aspen in teaching thermodynamics Used for a wide variety of important engineering tasks, Aspen Plus software is a modeling tool used for conceptual design, optimization, and performance monitoring of chemical processes. After more than twenty years, it remains one of the most popular and powerful chemical engineering simulation programs used both industrially and academically. Using Aspen Plus in Thermodynamics Instruction: A Step by Step Guide introduces the reader to the use of Aspen Plus in courses in thermodynamics. It prov

  11. Physical modeling of vortical cross-step flow in the American paddlefish, Polyodon spathula

    Science.gov (United States)

    Brooks, Hannah; Haines, Grant E.; Lin, M. Carly

    2018-01-01

    Vortical cross-step filtration in suspension-feeding fish has been reported recently as a novel mechanism, distinct from other biological and industrial filtration processes. Although crossflow passing over backward-facing steps generates vortices that can suspend, concentrate, and transport particles, the morphological factors affecting this vortical flow have not been identified previously. In our 3D-printed models of the oral cavity for ram suspension-feeding fish, the angle of the backward-facing step with respect to the model’s dorsal midline affected vortex parameters significantly, including rotational, tangential, and axial speed. These vortices were comparable to those quantified downstream of the backward-facing steps that were formed by the branchial arches of preserved American paddlefish in a recirculating flow tank. Our data indicate that vortices in cross-step filtration have the characteristics of forced vortices, as the flow of water inside the oral cavity provides the external torque required to sustain forced vortices. Additionally, we quantified a new variable for ram suspension feeding termed the fluid exit ratio. This is defined as the ratio of the total open pore area for water leaving the oral cavity via spaces between branchial arches that are not blocked by gill rakers, divided by the total area for water entering through the gape during ram suspension feeding. Our experiments demonstrated that the fluid exit ratio in preserved paddlefish was a significant predictor of the flow speeds that were quantified anterior of the rostrum, at the gape, directly dorsal of the first ceratobranchial, and in the forced vortex generated by the first ceratobranchial. Physical modeling of vortical cross-step filtration offers future opportunities to explore the complex interactions between structural features of the oral cavity, vortex parameters, motile particle behavior, and particle morphology that determine the suspension, concentration, and

  12. Modelling of epitaxial film growth with an Ehrlich-Schwoebel barrier dependent on the step height

    International Nuclear Information System (INIS)

    Leal, F F; Ferreira, S C; Ferreira, S O

    2011-01-01

    The formation of mounded surfaces in epitaxial growth is attributed to the presence of barriers against interlayer diffusion in the terrace edges, known as Ehrlich-Schwoebel (ES) barriers. We investigate a model for epitaxial growth using an ES barrier explicitly dependent on the step height. Our model has an intrinsic topological step barrier even in the absence of an explicit ES barrier. We show that mounded morphologies can be obtained even for a small barrier while a self-affine growth, consistent with the Villain-Lai-Das Sarma equation, is observed in the absence of an explicit step barrier. The mounded surfaces are described by a super-roughness dynamical scaling characterized by locally smooth (facetted) surfaces and a global roughness exponent α > 1. The thin film limit is featured by surfaces with self-assembled three-dimensional structures having an aspect ratio (height/width) that may increase or decrease with temperature depending on the strength of the step barrier. (fast track communication)

  13. Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation

    International Nuclear Information System (INIS)

    Kennedy, A.R.; Cairns, J.; Little, J.B.

    1984-01-01

    Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event; for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation in having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays, is, indeed, exactly what would be expected on that hypothesis. (author)

  14. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem, to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance with the Dual-EnKF, but unlike the latter which first propagates the state with the model then updates it with the new observation, the proposed scheme starts by an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performances. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
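The analysis (update) step shared by the EnKF variants discussed above can be sketched as a standard stochastic EnKF update with perturbed observations. This is the generic textbook form, not the paper's one-step-ahead smoothing scheme, and the toy two-state observation setup is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n, N) ensemble of states; y: (m,) observation vector;
    H: (m, n) linear observation operator; R: (m, m) obs-error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T             # perturbed observations
    return X + K @ (Y - H @ X)                     # updated ensemble

# Toy example: two-state system, only the first component observed.
N = 500
X = rng.normal(0.0, 1.0, size=(2, N))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
y = np.array([2.0])
Xa = enkf_analysis(X, y, H, R)
```

With a prior ensemble centered at zero and an accurate observation at 2.0, the updated ensemble mean of the observed component moves most of the way toward the observation, weighted by the ratio of prior to observation-error variance.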

  15. Arnold tongues and the Devil's Staircase in a discrete-time Hindmarsh–Rose neuron model

    Energy Technology Data Exchange (ETDEWEB)

    Felicio, Carolini C., E-mail: carolini.cf@gmail.com; Rech, Paulo C., E-mail: paulo.rech@udesc.br

    2015-11-06

    We investigate a three-dimensional discrete-time dynamical system, described by a three-dimensional map derived from a continuous-time Hindmarsh–Rose neuron model by the forward Euler method. For a fixed integration step size, we report a two-dimensional parameter-space for this system, where periodic structures, the so-called Arnold tongues, can be seen with periods organized in a Farey tree sequence. We also report possible modifications in this parameter-space, as a function of the integration step size. - Highlights: • We investigate the parameter-space of a particular 3D map. • Periodic structures, namely Arnold tongues, can be seen there. • They are organized in a Farey tree sequence. • The map was derived from a continuous-time Hindmarsh–Rose neuron model. • The forward Euler method was used for such purpose.
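The map studied above is obtained by applying the forward Euler method to the Hindmarsh–Rose ODEs. A minimal sketch with the commonly cited parameter values (the paper scans parameter space, so these specific values and the step size are assumptions):

```python
import numpy as np

def hr_euler_step(x, y, z, h, I=3.25, a=1.0, b=3.0, c=1.0, d=5.0,
                  r=0.005, s=4.0, x_rest=-1.6):
    """One forward-Euler step of the Hindmarsh-Rose neuron model,
    turning the continuous-time ODEs into a 3D map with step size h."""
    dx = y - a * x**3 + b * x**2 - z + I   # fast membrane potential
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x_rest) - z)        # slow adaptation current
    return x + h * dx, y + h * dy, z + h * dz

# Iterate the resulting discrete-time map.
x, y, z = 0.1, 0.0, 0.0
h = 0.01
traj = []
for _ in range(20000):
    x, y, z = hr_euler_step(x, y, z, h)
    traj.append(x)
```

Because h enters the map as an ordinary parameter, changing the integration step size deforms the parameter-space structures (the Arnold tongues) rather than merely changing the accuracy of the trajectory, which is the effect the paper investigates.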

  16. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus 12C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.

  17. Microsoft® Visual Basic® 2010 Step by Step

    CERN Document Server

    Halvorson, Michael

    2010-01-01

    Your hands-on, step-by-step guide to learning Visual Basic® 2010. Teach yourself the essential tools and techniques for Visual Basic® 2010-one step at a time. No matter what your skill level, you'll find the practical guidance and examples you need to start building professional applications for Windows® and the Web. Discover how to: Work in the Microsoft® Visual Studio® 2010 Integrated Development Environment (IDE)Master essential techniques-from managing data and variables to using inheritance and dialog boxesCreate professional-looking UIs; add visual effects and print supportBuild com

  18. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    Directory of Open Access Journals (Sweden)

    Craig Cora L

    2011-06-01

    Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.

  19. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.

  20. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming of a molecular dynamics simulation code is carried out with a step-by-step programming technique using the two-phase method. Within a certain range of computing parameters, parallel performance is obtained by the level of parallel programming that decomposes the calculation according to do-loop indices onto each processor, on both the vector-parallel computer VPP500 and the scalar-parallel computer Paragon. VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. The time-consuming parts of the program are then concentrated in fewer parts, which can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  1. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of the proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  2. Pendekatan Pelatihan On-Site dan Step by Step untuk Optimalisasi Fungsi Guru dalam Pembelajaran [On-Site and Step-by-Step Training Approaches for Optimizing Teachers' Role in Instruction]

    Directory of Open Access Journals (Sweden)

    Moch. Sholeh Y.A. Ichrom

    2016-02-01

    Full Text Available Remoteness of programme content from teachers' real work situation and unsuitability of the approach employed were suspected as the main reasons contributing to the failure of many in-service teacher training programmes. A step-by-step, on-site teacher training (SSOTT) model was tried out in this experiment to study whether the weaknesses of in-service programmes could be rectified. As it was tried out in relation to kindergarten mathematics, it was called the SSOTT-MTW (Step by Step On-site Teacher Training-Mathematics Their Way) model. Eighty-four kindergartens were involved, in which 84 teachers and 877 pupils were recruited as experimental subjects. The teachers were divided into three groups. One group was instructed using the One Period Teacher Training (OPOTT-MTW) model, the second group was trained with the SSOTT-MTW model, and the last group was given no training (NTT) at all. Results of the experiment showed that the SSOTT-MTW group outperformed the other groups. It was also shown that pupil and parent participation in teaching-learning activities improved significantly.

  3. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    Science.gov (United States)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow, or for particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed by combining the velocity moments obtained with standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques from data sets collected over different time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels than standard PIV or PTV measurements, without the need for filtering and/or windowing. Particle displacement between two frames is computed for multiple time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second-order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
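
The noise-cancellation idea can be illustrated on synthetic data: if position noise is independent between frames, the measured velocity variance at lag k·dt equals the true variance plus a 2σₙ²/(k·dt)² term, so a linear fit in 1/(k·dt)² recovers the noise-free variance. A sketch under these assumptions (the constant-velocity track model and all parameter values are invented for illustration, not the experiment's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tracks": constant true velocity per track plus i.i.d.
# position noise at each endpoint, mimicking localization error.
n_tracks, sigma_v, sigma_n, dt0 = 50_000, 1.0, 0.5, 1.0
v_true = rng.normal(0.0, sigma_v, n_tracks)

lags = np.arange(1, 6)            # multiples of the base time step
var_meas = []
for k in lags:
    noise = rng.normal(0.0, sigma_n, (n_tracks, 2))
    # displacement over k*dt0, corrupted by noise at both endpoints
    v_est = (v_true * k * dt0 + noise[:, 1] - noise[:, 0]) / (k * dt0)
    var_meas.append(v_est.var())

# Model: var_meas(k) = var_true + 2*sigma_n**2 / (k*dt0)**2.
# A linear fit in u = 1/(k*dt0)**2 cancels the noise term:
u = 1.0 / (lags * dt0) ** 2
slope, intercept = np.polyfit(u, var_meas, 1)
# intercept -> noise-free velocity variance (close to sigma_v**2)
```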

  4. Considering dominance in reduced single-step genomic evaluations.

    Science.gov (United States)

    Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U

    2018-06-01

    Single-step models including dominance can pose an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question of whether a reduced single-step model is able to estimate breeding values of bulls, as well as breeding values, dominance deviations and total genetic values of cows, with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended by 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in better agreement of its results with the single-step model. Accuracies of genetic values were largest with the single-step model and smallest with the reduced single-step model when only the genotyped cows were modelled. The results indicate that a reduced single-step model is suitable for estimating breeding values of bulls, as well as breeding values, dominance deviations and total genetic values of cows, with acceptable quality. © 2018 Blackwell Verlag GmbH.

  5. Ecological monitoring in a discrete-time prey-predator model.

    Science.gov (United States)

    Gámez, M; López, I; Rodríguez, C; Varga, Z; Garay, J

    2017-09-21

    The paper is aimed at the methodological development of ecological monitoring in discrete-time dynamic models. In earlier papers, in the framework of continuous-time models, we have shown how a systems-theoretical methodology can be applied to the monitoring of the state process of a system of interacting populations, also estimating certain abiotic environmental changes such as pollution, climatic or seasonal changes. In practice, however, there may be good reasons to use discrete-time models. (For instance, there may be discrete cycles in the development of the populations, or observations can be made only at discrete time steps.) Therefore the present paper is devoted to the development of the monitoring methodology in the framework of discrete-time models of population ecology. By monitoring we mean that, observing only certain component(s) of the system, we reconstruct the whole state process. This may be necessary, e.g., when in a complex ecosystem the observation of the densities of certain species is impossible, or too expensive. For this first presentation of the proposed methodology, we have chosen a discrete-time version of the classical Lotka-Volterra prey-predator model. This is a minimal but not trivial system where the methodology can still be presented. We also show how this methodology can be applied to estimate the effect of an abiotic environmental change, using a component of the population system as an environmental indicator. Although this approach is illustrated in the simplest possible case, it can easily be extended to larger ecosystems with several interacting populations and different types of abiotic environmental effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
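
A discrete-time prey-predator iteration of the general kind discussed above can be sketched as follows; the explicit update scheme and all parameter values are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def discrete_lotka_volterra(x0, y0, a, b, c, d, n_steps):
    """Discrete-time Lotka-Volterra iteration:
      x[t+1] = x[t] + x[t]*(a - b*y[t])   # prey
      y[t+1] = y[t] + y[t]*(c*x[t] - d)   # predator
    Parameters are illustrative; equilibrium is (d/c, a/b)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + x * (a - b * y))
        ys.append(y + y * (c * x - d))
    return np.array(xs), np.array(ys)

# Start near the equilibrium (7.5, 5.0) so the iteration oscillates.
xs, ys = discrete_lotka_volterra(7.0, 5.0, a=0.05, b=0.01,
                                 c=0.004, d=0.03, n_steps=100)
```

Monitoring, in the paper's sense, would then mean reconstructing both `xs` and `ys` while observing only one of them.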

  6. Tax-Optimal Step-Up and Imperfect Loss Offset

    Directory of Open Access Journals (Sweden)

    Markus Diller

    2012-05-01

    Full Text Available In the field of mergers and acquisitions, German and international tax law allow for several opportunities to step up a firm's assets, i.e., to revalue the assets at fair market values. When a step-up is performed, the taxpayer recognizes a taxable gain but also obtains tax benefits in the form of higher future depreciation allowances associated with the stepped-up tax base of the assets. This tax-planning problem is well known in the taxation literature and can also be applied to firm valuation in the presence of taxation. However, the known models usually assume a perfect loss offset. If this assumption is abandoned, the depreciation allowances may lose value, as they become tax-effective at a later point in time, or even never if there are not enough cash flows to offset them against. This aspect is especially relevant if future cash flows are assumed to be uncertain. This paper shows that a step-up may be disadvantageous, or a firm overvalued, if these aspects are not integrated into the basic calculus. Compared to the standard approach, assets should be stepped up only in a few cases and - under specific conditions - at a later point in time. Firm values may be considerably lower under imperfect loss offset.

  7. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  8. Aerial robot intelligent control method based on back-stepping

    Science.gov (United States)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law achieves the desired attitude tracking performance and good robustness in the presence of uncertainties and large errors in the model parameters.

  9. A time series model: First-order integer-valued autoregressive (INAR(1))

    Science.gov (United States)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. A time series model, the first-order Integer-valued AutoRegressive (INAR(1)) model, is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on one previous period of the process. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses the median or a Bayesian forecasting methodology. The median forecasting methodology finds the least integer s whose cumulative distribution function (CDF) value is at least 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s whose CDF value is at least u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 until April 2016.
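
The binomial-thinning construction and the CLS estimator can be sketched as follows (the values of α and λ are illustrative assumptions; the pneumonia series is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_inar1(alpha, lam, n, x0=0):
    """INAR(1): X_t = alpha o X_{t-1} + eps_t, where 'o' is the
    binomial thinning operator and eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)        # innovation term
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=5000)

# Conditional Least Squares: regress X_t on X_{t-1}.
x_prev, x_curr = x[:-1], x[1:]
alpha_hat = np.cov(x_curr, x_prev)[0, 1] / np.var(x_prev, ddof=1)
lam_hat = x_curr.mean() - alpha_hat * x_prev.mean()
```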

  10. Experimental and modeling study on relation of pedestrian step length and frequency under different headways

    Science.gov (United States)

    Zeng, Guang; Cao, Shuchao; Liu, Chi; Song, Weiguo

    2018-06-01

    It is important to study pedestrian stepping behavior and characteristics for facility design and pedestrian flow studies, owing to pedestrians' bipedal movement. In this paper, step data are extracted from pedestrian trajectories in a single-file experiment. It is found that step length and step frequency decrease by 75% and 33%, respectively, when the global density increases from 0.46 ped/m to 2.28 ped/m. As headway increases, they first increase and then remain constant once the headway exceeds 1.16 m and 0.91 m, respectively. Step length and frequency under different headways are well described by normal distributions. Moreover, step length and frequency are related, and the relationship depends on headway: step frequency decreases as step length increases, with two distinct tendencies. When the headway is between about 0.6 m and 1.0 m, the decrease rate of step frequency increases with step length, whereas it decreases when the headway is beyond about 1.0 m or below about 0.6 m. A model is built based on the experimental results. In fundamental diagrams, the simulation results agree well with those of the experiment. The study can be helpful for understanding pedestrian stepping behavior and designing public facilities.

  11. Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure

    Directory of Open Access Journals (Sweden)

    Diego Masotti

    2015-01-01

    Full Text Available The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array (TMA). The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used to localize the tagged sensors to be energized; the entire 16-element TMA then provides power to the detected tags by exploiting the fundamental and first-sideband harmonic radiation. An investigation of the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy-transfer performance of the entire nonlinear radiating system is demonstrated.

  12. First Steps Towards AN Integrated Citygml-Based 3d Model of Vienna

    Science.gov (United States)

    Agugiaro, G.

    2016-06-01

    This paper presents and discusses the results of the initial steps (selection, analysis, preparation and eventual integration of a number of datasets) toward the creation of an integrated, semantic, three-dimensional, CityGML-based virtual model of the city of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. It is being adopted by more and more cities all over the world. The work described in this paper is embedded within the European Marie-Curie ITN project "Ci-nergy, Smart cities with sustainable energy systems", which aims, among other goals, at developing urban decision-making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is vital to set up a common, unique and spatio-semantically coherent urban model to be used as the information hub for all applications being developed. This paper reports on the work done so far: it describes the test area and the available data sources, and it shows and exemplifies the data integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, together with some comments about their quality and limitations, are presented, along with a discussion of the next steps and some planned improvements.

  13. Designers workbench: toward real-time immersive modeling

    Science.gov (United States)

    Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu

    2000-05-01

    This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. Traditionally, however, a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, the emphasis is on content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  14. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    Science.gov (United States)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    provide inputs for the next ionospheric model time step and are then stored in a MySQL database as the first part of the time-specific record. The RMM then performs synchronization of the input times with the current model time, prepares a decision on initialization for the next model time step, and monitors its execution. Then, as soon as the model completes computations for the next time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows for monitoring of current developments and short-term forecasts, and facilitates access to the comparisons archive stored in the database.

  15. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    Single-file diffusion behaves as normal diffusion at short times and as subdiffusion at long times. These properties can be described in terms of fractional Brownian motion with a variable Hurst exponent, or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion, which can be regarded as a special case of multifractional Brownian motion with a step-function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long- and short-time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function, or a combination of these functions are studied in detail. In addition to the case where the short-time behavior of single-file diffusion is normal diffusion, we also consider the possibility of a process that begins as ballistic motion.
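
A direct discretization of the Riemann-Liouville representation with a step-function Hurst exponent can be sketched as follows; the discretization scheme, the crossover location, and the H values are illustrative assumptions, not the authors' construction:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(2)

def rl_step_fbm(n, dt, hurst_fn):
    """Riemann-Liouville fractional Brownian motion with a
    time-dependent (here step-like) Hurst exponent H(t), via direct
    discretization of
        B_H(t) = 1/Gamma(H+1/2) * int_0^t (t-s)^(H-1/2) dB(s)."""
    dB = rng.normal(0.0, np.sqrt(dt), n)
    t = np.arange(1, n + 1) * dt
    B = np.zeros(n)
    for i in range(n):
        H = hurst_fn(t[i])
        # kernel weights (t_i - s_k)^(H-1/2); all arguments are > 0
        w = (t[i] - np.arange(i + 1) * dt) ** (H - 0.5)
        B[i] = w @ dB[: i + 1] / gamma(H + 0.5)
    return t, B

# Step Hurst exponent: superdiffusive at short times,
# subdiffusive at long times, with a crossover at t = 1.
H_step = lambda t: 0.75 if t < 1.0 else 0.25
t, B = rl_step_fbm(n=400, dt=0.01, hurst_fn=H_step)
```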

  16. On the impacts of coarse-scale models of realistic roughness on a forward-facing step turbulent flow

    International Nuclear Information System (INIS)

    Wu, Yanhua; Ren, Huiying

    2013-01-01

    Highlights: ► Discrete wavelet transform was used to produce coarse-scale models of roughness. ► PIV measurements were performed in a forward-facing step flow with roughness of different scales. ► Impacts of roughness scales on various turbulence statistics were studied. -- Abstract: The present work explores the impacts of coarse-scale models of realistic roughness on turbulent boundary layers over forward-facing steps. Surface topographies of different scale resolutions were obtained from a novel multi-resolution analysis using the discrete wavelet transform. PIV measurements were performed in the streamwise–wall-normal (x–y) planes at two different spanwise positions in turbulent boundary layers at Re_h = 3450 and δ/h = 8, where h is the mean step height and δ is the incoming boundary layer thickness. It was observed that large-scale but low-amplitude roughness scales had small effects on the forward-facing step turbulent flow. For the higher-resolution model of the roughness, the turbulence characteristics within 2h downstream of the step are distinct from those over the original realistic rough step at the measurement position where the roughness profile has a positive slope immediately after the step's front. On the other hand, much smaller differences exist in the flow characteristics at the other measurement position, whose roughness profile has a negative slope following the step's front.

  17. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
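
The generative form of such a model (a hidden logistic process gating between polynomial regression regimes) can be sketched on synthetic data; the regime polynomials, switch location, gate steepness, and noise level below are illustrative assumptions, not the paper's railway switch data:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_latent_process_regression(n=500, noise=0.1):
    """Generative sketch: a logistic gate switches (here abruptly,
    via a steep slope; a gentler slope would give a smooth
    transition) between two polynomial regimes."""
    t = np.linspace(0.0, 1.0, n)
    # probability of being in regime 2: steep logistic gate at t=0.5
    p2 = 1.0 / (1.0 + np.exp(-80.0 * (t - 0.5)))
    z = rng.random(n) < p2                 # hidden regime labels
    regime1 = 1.0 + 0.5 * t                # polynomial regime 1
    regime2 = 5.0 - 2.0 * t                # polynomial regime 2
    y = np.where(z, regime2, regime1) + rng.normal(0.0, noise, n)
    return t, y, z

t, y, z = simulate_latent_process_regression()
```

An EM fit, as in the paper, would recover the gate parameters and the regime polynomials from `(t, y)` alone, without observing `z`.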

  18. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

    Full Text Available A numerical model was developed for simulating a three-dimensional multilayer hydrodynamic and thermodynamic model in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. The model was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques were described and the model test and application were presented. For the model application to the northern part of Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles were well consistent with the field observations.

  19. Theoretical intercomparison of multi-step direct reaction models and computational intercomparison of multi-step direct reaction models

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-08-01

    In recent years several statistical theories have been developed concerning multistep direct (MSD) nuclear reactions. In addition, dominant in applications is a whole class of semiclassical models that may be subsumed under the heading of 'generalized exciton models'. These are basically MSD-type extensions on top of compound-like concepts. In this report the relationship between their underlying statistical MSD postulates is highlighted. A common framework is outlined that makes it possible to generate the various MSD theories by assigning statistical properties to different parts of the nuclear Hamiltonian. It is then shown that distinct forms of nuclear randomness are embodied in the mentioned theories. All these theories appear very similar at a qualitative level. In order to explain the high-energy tails and forward-peaked angular distributions typical of particles emitted in MSD reactions, it is imagined that the incident continuum particle stepwise loses its energy and direction in a sequence of collisions, thereby creating new particle-hole pairs in the target system. At each step emission may take place. The statistical aspect comes in because many continuum states are involved in the process. These are supposed to display chaotic behavior, the associated randomness assumption giving rise to important simplifications in the expression for MSD emission cross sections. This picture suggests that the mentioned MSD models can be interpreted as variants of essentially one and the same theory. However, this appears not to be the case. To show this, the usual MSD distinction within the composite reacting nucleus between the fast continuum particle and the residual system is examined: the residual interactions among the nucleons of the core are to be distinguished from those of the leading particle with the residual system. This distinction turns out to be crucial to the present analysis. 27 refs.; 5 figs.; 1 tab

  20. Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS

    Directory of Open Access Journals (Sweden)

    Nicolas Sommet

    2017-09-01

    Full Text Available This paper aims to introduce multilevel logistic regression analysis in a simple and practical way. First, we introduce the basic principles of logistic regression analysis (conditional probability, logit transformation, odds ratio). Second, we discuss the two fundamental implications of running this kind of analysis with a nested data structure: in multilevel logistic regression, the odds that the outcome variable equals one (rather than zero) may vary from one cluster to another (i.e., the intercept may vary), and the effect of a lower-level variable may also vary from one cluster to another (i.e., the slope may vary). Third and finally, we provide a simplified three-step "turnkey" procedure for multilevel logistic regression modeling: (preliminary phase) cluster- or grand-mean centering variables; (step #1) running an empty model and calculating the intraclass correlation coefficient (ICC); (step #2) running a constrained and an augmented intermediate model and performing a likelihood ratio test to determine whether considering the cluster-based variation of the effect of the lower-level variable improves the model fit; and (step #3) running a final model and interpreting the odds ratios and confidence intervals to determine whether the data support your hypothesis. Command syntax for Stata, R, Mplus, and SPSS is included. These steps are applied to a study on Justin Bieber, because everybody likes Justin Bieber.
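
Step #1 of the procedure (empty model plus ICC) can be illustrated on simulated clustered data. The sketch below uses a crude moment estimate of the random-intercept variance via empirical logits, not the likelihood-based fit that Stata, R, Mplus, or SPSS would produce, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate clustered binary data: random intercepts on the logit scale.
n_clusters, n_per, sigma_u = 200, 50, 1.0
u = rng.normal(0.0, sigma_u, n_clusters)       # cluster effects
p = 1.0 / (1.0 + np.exp(-u))                   # intercept beta0 = 0
y = rng.binomial(1, np.repeat(p, n_per))

# Step #1 analogue: variance of the empirical cluster logits,
# corrected (approximately) for within-cluster sampling variance.
s = y.reshape(n_clusters, n_per).sum(axis=1)
logit = np.log((s + 0.5) / (n_per - s + 0.5))
phat = (s + 0.5) / (n_per + 1.0)
sampling_var = 1.0 / (n_per * phat * (1.0 - phat))
sigma_u2_hat = logit.var() - sampling_var.mean()

# Latent-threshold ICC for logistic models:
# ICC = sigma_u^2 / (sigma_u^2 + pi^2/3); true value here ~0.23.
icc = sigma_u2_hat / (sigma_u2_hat + np.pi ** 2 / 3.0)
```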

  1. Minimal features of a computer and its basic software to execute the NEPTUNIX 2 numerical step

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-12-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non-linear algebro-differential equations. Its main features are: non-linear or time-dependent parameters, implicit form, stiff systems, and dynamic change of equations leading to discontinuities in some variables. Thus the mathematical model is built with an equation set F(x,x',t,l) = 0, where t is the independent variable, x' the derivative of x and l an ''algebrized'' logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non-numerical step and a numerical step. The non-numerical step must be executed on an IBM 370-series computer or a compatible computer. This step generates a FORTRAN-language model picture fitted to the computer carrying out the numerical step. The numerical step consists of building and running a mathematical model simulator. This execution step of NEPTUNIX 2 has been designed to be portable to many computers. The present manual describes the minimal features of such a host computer used for executing the NEPTUNIX 2 numerical step [fr

  2. Energetics of highly kinked step edges

    NARCIS (Netherlands)

    Zandvliet, Henricus J.W.

    2010-01-01

    We have determined the step edge free energy, the step edge stiffness and the dimensionless inverse step edge stiffness of the highly kinked <010>-oriented step on a (001) surface of a simple square lattice within the framework of a solid-on-solid model. We have found an exact expression for the step

  3. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

    The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of the device, including the mass-transfer step of ionic reactants. Results of the calculation demonstrate not only a qualitative explanation of the characteristics observed in experiment, but also quantitative agreement with the measured current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  4. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    Full Text Available This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to raise stress levels above normal use during the experiment in order to observe failures sooner. The test items are assumed to follow the Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed. These include maximum likelihood estimation and confidence interval estimation based on asymptotic normality, which generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  5. Pendekatan Pelatihan On-Site dan Step by Step untuk Optimalisasi Fungsi Guru dalam Pembelajaran

    OpenAIRE

    Moch. Sholeh Y.A. Ichrom

    2016-01-01

    Remoteness of programme content from teachers' real work situation and unsuitability of approach employed were suspected as main reasons contributing to the failure of many inservise teacher training programmes. A step by step, onsite teacher training (SSOTT) model was tried out in this experiment to study if the weakness of inservise programmes could be rectified. As it was tried out in relation with kindergarten mathemathics it was then called SSOTT-MTW (Step by Step Onsite Teacher Training...

  6. Pendekatan Pelatihan On-Site Dan Step by Step Untuk Optimalisasi Fungsi Guru Dalam Pembelajaran

    OpenAIRE

    Ichrom, Moch. Sholeh Y.A

    1996-01-01

    Remoteness of programme content from teachers' real work situations and unsuitability of the approach employed were suspected as the main reasons contributing to the failure of many in-service teacher training programmes. A step-by-step, on-site teacher training (SSOTT) model was tried out in this experiment to study whether the weaknesses of in-service programmes could be rectified. As it was tried out in relation to kindergarten mathematics, it was then called SSOTT-MTW (Step by Step Onsite Teacher Training...

  7. Effects of varying the step particle distribution on a probabilistic transport model

    International Nuclear Information System (INIS)

    Bouzat, S.; Farengo, R.

    2005-01-01

    The consequences of varying the step particle distribution on a probabilistic transport model, which captures the basic features of transport in plasmas and was recently introduced in Ref. 1 [B. Ph. van Milligen et al., Phys. Plasmas 11, 2272 (2004)], are studied. Different superdiffusive transport mechanisms generated by a family of distributions with algebraic decays (Tsallis distributions) are considered. It is observed that the possibility of changing the superdiffusive transport mechanism improves the flexibility of the model for describing different situations. The use of the model to describe the low (L) and high (H) confinement modes is also analyzed

  8. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    Science.gov (United States)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time-step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time-step constraint imposed by the MOC and the occasional need for extremely small time steps to maintain stability and/or accuracy can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
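
The fixed-time-step constraint that motivates the paper's hybrid approach is easy to state concretely: on an MOC grid the time step is tied to the spatial reach length and the wave speed. A minimal sketch (the numbers are illustrative, not from the case study):

```python
def moc_time_step(pipe_length_m, n_segments, wave_speed_m_s):
    """Fixed time step imposed by the Method of Characteristics: the
    characteristics must travel exactly one spatial reach per step,
    so dt = dx / a, where a is the pressure-wave propagation speed."""
    dx = pipe_length_m / n_segments
    return dx / wave_speed_m_s

# Illustrative numbers: a 100 m pipe, 50 reaches, 1200 m/s wave speed.
dt = moc_time_step(100.0, 50, 1200.0)
# Every component in the simulation must now advance with this fixed dt,
# which is why stiff dynamic components can force very small steps (and
# long run times) for the whole model -- the problem the VSI coupling solves.
```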

  9. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. If the disease process is assumed to evolve as a standard continuous-time Markov chain (CTMC), the likelihood is tractable, but the implied exponential sojourn time distributions are typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.

  10. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    Science.gov (United States)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends heavily on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can deliver an accurate result by increasing the order of the polynomials and suppressing spurious solutions. BE is a stable time-discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zero entries in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integration technique. A direct solver, with its speed independent of the condition number, is adopted for quickly solving the large-scale sparse linear equation system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time range 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for magnetic field B and magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and the 3D model, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order has little improvement on modeling accuracy, but the time interval plays
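
The "no limitation on time steps" property of Backward Euler that the abstract relies on can be seen on a scalar decay equation rather than the full EM field system; this is a minimal sketch, not the SETD code:

```python
def backward_euler_decay(u0, k, dt, n_steps):
    """Backward Euler for du/dt = -k*u. The implicit update
    u_{n+1} = u_n / (1 + k*dt) satisfies |1/(1+k*dt)| < 1 for any
    dt > 0, so the scheme is unconditionally stable -- unlike an
    explicit update u_{n+1} = (1 - k*dt)*u_n, which blows up for
    k*dt > 2."""
    u = u0
    for _ in range(n_steps):
        u = u / (1.0 + k * dt)
    return u

# Even with a huge step (k*dt = 10) the solution decays, never oscillates.
u = backward_euler_decay(1.0, k=1.0, dt=10.0, n_steps=5)
```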

  11. Modelling of subcritical free-surface flow over an inclined backward-facing step in a water channel

    Directory of Open Access Journals (Sweden)

    Šulc Jan

    2012-04-01

    The contribution deals with the experimental and numerical modelling of subcritical turbulent flow in an open channel with an inclined backward-facing step. The step, with an inclination angle α = 20°, was placed in a water channel of cross-section 200×200 mm. Experiments were carried out by means of the PIV and LDA measuring techniques. Numerical simulations were executed by means of the commercial software ANSYS CFX 12.0. Numerical results obtained with two-equation models and the EARSM turbulence model, supplemented with transport equations for turbulent energy and specific dissipation rate, were compared with experimental data. The modelling concentrated in particular on the development of the flow separation and on the corresponding changes of the free surface.

  12. Linking pedestrian flow characteristics with stepping locomotion

    Science.gov (United States)

    Wang, Jiayue; Boltes, Maik; Seyfried, Armin; Zhang, Jun; Ziemer, Verena; Weng, Wenguo

    2018-06-01

    While the properties of human traffic flow are described by speed, density and flow, the locomotion of pedestrians is based on steps. To relate characteristics of the human locomotor system with properties of human traffic flow, this paper aims to connect gait characteristics such as step length, step frequency, swaying amplitude and synchronization with speed and density, and thus to lay the groundwork for advanced pedestrian models. To this end, an observational and experimental study of the single-file movement of pedestrians at different densities is conducted. Methods to measure step length, step frequency, swaying amplitude and step synchronization from trajectories of the head are proposed. Mathematical models for the relations between step length or frequency and speed are evaluated. How step length and step duration are influenced by factors such as body height and density is investigated; it is shown that the effect of body height on step length and step duration changes with density. Furthermore, two different types of in-phase step synchronization between two successive pedestrians are observed, and the influence of step synchronization on step length is examined.
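
One simple way to recover step timing from a head trajectory, as the abstract describes, is to detect zero crossings of the lateral sway signal. The assumption that the sway completes one cycle per stride (so crossings occur once per step) and the synthetic 1 Hz signal below are ours, for illustration only:

```python
import math

def step_times_from_sway(t, y):
    """Estimate step instants from lateral head sway: each zero crossing
    of the mean-removed sway signal is taken as one step (the head sways
    once per stride, i.e. per two steps)."""
    mean_y = sum(y) / len(y)
    s = [v - mean_y for v in y]
    times = []
    for i in range(1, len(s)):
        if s[i - 1] * s[i] < 0:  # sign change => a crossing in (t[i-1], t[i])
            frac = s[i - 1] / (s[i - 1] - s[i])  # linear interpolation
            times.append(t[i - 1] + frac * (t[i] - t[i - 1]))
    return times

# Synthetic head trajectory: 4 s at 100 Hz, 5 cm sway at 1 Hz.
t = [i * 0.01 for i in range(401)]
y = [0.05 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]
steps = step_times_from_sway(t, y)
# Crossings occur twice per sway cycle, i.e. a 2 Hz step frequency.
step_freq = (len(steps) - 1) / (steps[-1] - steps[0])
```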

  13. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. Here we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time step selection algorithms and their influence on the accuracy and efficiency of the various solution options

  14. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Malaria is one of the most severe problems faced by the world even today. Understanding causative factors such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a shorter time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, together with Gauss-Markov models and ANOVA for testing the predictions, validity and construction of the confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.
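
The core of the approach, fitting a low-order polynomial to lagged incidence values and forecasting recursively, can be sketched in a few lines. The quadratic fit, the made-up SPR series and the one-lag autoregression below are illustrative, not the paper's Chennai data or full multi-variable method:

```python
def polyfit2(x, y):
    """Least-squares quadratic fit y ~ c0 + c1*x + c2*x^2 via the 3x3
    normal equations, solved by Gauss-Jordan elimination."""
    S = [sum(xi ** k for xi in x) for k in range(5)]        # power sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(3)]
    for col in range(3):                                    # Gauss-Jordan
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        b[col] /= p
        for r in range(3):
            if r != col:
                f = A[r][col]
                A[r] = [vr - f * vc for vr, vc in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b                                                # [c0, c1, c2]

# One-lag autoregression: next value as a quadratic in the current value.
spr = [1.0, 1.2, 1.5, 1.9, 2.3, 2.6, 2.8, 2.9]              # made-up series
c = polyfit2(spr[:-1], spr[1:])

def forecast(last, n_steps):
    """Recursive multi-step forecast: feed each prediction back in."""
    out = []
    for _ in range(n_steps):
        last = c[0] + c[1] * last + c[2] * last ** 2
        out.append(last)
    return out

# Sanity check: exact quadratic data should be recovered by polyfit2.
xs = [0, 1, 2, 3, 4, 5]
check = polyfit2(xs, [0.5 + 0.3 * x + 0.1 * x * x for x in xs])
preds = forecast(spr[-1], 3)
```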

  15. Development process and data management of TurnSTEP, a STEP-compliant CNC system for turning

    NARCIS (Netherlands)

    Choi, I.; Suh, S.-H; Kim, K.; Song, M.S.; Jang, M.; Lee, B.-E.

    2006-01-01

    TurnSTEP is one of the earliest STEP-compliant CNC systems for turning. Based on the STEP-NC data model formalized as ISO 14649-12 and 121, it is designed to support intelligent and autonomous control of NC machines for e-manufacturing. The present paper introduces the development process and data

  16. Modeling of Bacillus cereus distribution in pasteurized milk at the time of consumption

    Directory of Open Access Journals (Sweden)

    Ľubomír Valík

    2013-02-01

    The distribution of Bacillus cereus in pasteurized milk at the time of consumption was modelled in this study, using data from pasteurized milk produced in Slovakia. The Modular Process Risk Model (MPRM) methodology was applied over all the consecutive steps in the food chain. The main factors involved in the risk of being exposed to unacceptable levels of B. cereus (model output) were the initial density of B. cereus after milk pasteurization and the storage temperatures and times (model input). Monte Carlo simulations were used for the probability calculation of B. cereus density. By applying sensitivity analysis, the influence of the input factors and their threshold values on the final count of B. cereus was determined. The results of the general-case exposure assessment indicated that almost 14 % of Tetra Brik cartons can contain > 10^4 cfu/ml of B. cereus given the temperature distribution taken into account and the time of pasteurized milk consumption. doi:10.5219/264
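
The Monte Carlo exposure calculation the abstract describes can be sketched as sampling the model inputs, growing the population, and counting cartons over the limit. Every distribution and parameter below is an assumption made for illustration; none are the study's values:

```python
import math
import random

def fraction_exceeding(n_sims=20000, limit=1e4, seed=7):
    """Monte Carlo sketch of the exposure assessment: sample an initial
    B. cereus density, a storage temperature and a storage time, grow the
    population exponentially, and count cartons exceeding the limit.
    All distributions and parameters here are illustrative, not the study's."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sims):
        log10_n0 = rng.uniform(-1.0, 1.0)     # initial density, log10 cfu/ml
        temp_c = rng.uniform(4.0, 12.0)       # storage temperature, degrees C
        days = rng.uniform(1.0, 10.0)         # storage time, days
        mu = 0.15 * max(0.0, temp_c - 4.0)    # growth rate, 1/day (toy model)
        log10_n = log10_n0 + mu * days / math.log(10.0)
        if 10.0 ** log10_n > limit:
            exceed += 1
    return exceed / n_sims

frac = fraction_exceeding()  # fraction of simulated cartons over 1e4 cfu/ml
```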

  17. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapping frequency intervals by carrying out a concatenated treatment of the wavelet, largely avoiding redundant frequency information and adapting to the wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of the update parameters of the nonlinear conjugate gradient (CG) method and of the step-length formulas on the multiscale FWI through several numerical tests. The investigation of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, makes clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are the more efficient among the eight, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or partly insensitive to Gaussian white noise and to the complexity of the model. When the initial velocity model deviates far from the real model or the

  18. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the parameters involved. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on a corresponding time-inhomogeneous Gauss-Markov diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the modeling process, taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two nonsingular Volterra integral equations of the second kind via numerical quadrature. We provide numerical estimates of the mean FET as approximations of the dwell-time of the protein dynamics. The percentage of backward steps is given in agreement with experimental data. Numerical and simulation results are compared and discussed.
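
The simulation side of this setup, Euler discretization of an SDE between two absorbing boundaries, with exits interpreted as forward or backward steps, can be sketched as follows. A constant drift stands in here for the paper's time-varying forces, and all parameter values are illustrative:

```python
import math
import random

def first_exit(seed, x0=0.0, lower=-1.0, upper=1.0,
               drift=1.5, sigma=0.8, dt=1e-3, t_max=50.0):
    """Euler-Maruyama simulation of dX = drift*dt + sigma*dW between two
    absorbing boundaries. Returns (exit_time, exited_upward). A constant
    drift is used in place of the paper's time-inhomogeneous forces."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_max:
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= upper:
            return t, True           # forward step of the myosin head
        if x <= lower:
            return t, False          # backward step
    return t_max, True               # censored (did not exit in time)

runs = [first_exit(s) for s in range(200)]
mean_fet = sum(r[0] for r in runs) / len(runs)               # dwell-time proxy
backward_frac = sum(1 for r in runs if not r[1]) / len(runs)  # backward steps
```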

  19. Associations of office workers' objectively assessed occupational sitting, standing and stepping time with musculoskeletal symptoms.

    Science.gov (United States)

    Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M

    2018-04-22

    We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.

  20. Finite cluster renormalization and new two step renormalization group for Ising model

    International Nuclear Information System (INIS)

    Benyoussef, A.; El Kenz, A.

    1989-09-01

    New types of renormalization group theory using the generalized Callen identities are exploited in the study of the Ising model. Another type of two-step renormalization is proposed. Critical couplings and critical exponents y_T and y_H are calculated by these methods for square and simple cubic lattices, using different size clusters. (author). 17 refs, 2 tabs

  1. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment.

    Science.gov (United States)

    Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim

    2017-06-01

    Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
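
Intraindividual reaction time variability (IIV) is typically quantified as the standard deviation, or coefficient of variation, of one person's trial-by-trial reaction times; a minimal sketch (the sample values are made up):

```python
import math

def iiv_sd(rts_ms):
    """Intraindividual variability as the sample SD of one person's
    reaction times across trials (ms)."""
    m = sum(rts_ms) / len(rts_ms)
    return math.sqrt(sum((r - m) ** 2 for r in rts_ms) / (len(rts_ms) - 1))

def iiv_cv(rts_ms):
    """Coefficient of variation: SD scaled by the mean, removing
    between-person differences in overall speed."""
    return iiv_sd(rts_ms) / (sum(rts_ms) / len(rts_ms))

sd = iiv_sd([300.0, 310.0, 290.0, 305.0, 295.0])
cv = iiv_cv([300.0, 310.0, 290.0, 305.0, 295.0])
```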

  2. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
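
The multirate prediction step can be illustrated by the index set it produces: every step is predicted near the present, while only strided steps are predicted farther out. The shape below is our reading of the idea, not code from the paper:

```python
def multirate_horizon(n_near, n_far, stride):
    """Prediction-step indices for a multirate horizon: dense steps close
    to the present, strided steps farther out. Fewer predicted outputs
    means a smaller cost function and cheaper gain computation, while a
    long horizon is still covered and the near window is emphasized."""
    near = list(range(1, n_near + 1))
    far = list(range(n_near + stride, n_near + n_far * stride + 1, stride))
    return near + far

idx = multirate_horizon(4, 3, 4)   # -> [1, 2, 3, 4, 8, 12, 16]
# 7 predicted outputs span a 16-step horizon instead of 16 outputs.
```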

  3. A Hybrid Fuzzy Time Series Approach Based on Fuzzy Clustering and Artificial Neural Network with Single Multiplicative Neuron Model

    Directory of Open Access Journals (Sweden)

    Ozge Cagcag Yolcu

    2013-01-01

    Particularly in recent years, artificial intelligence optimization techniques have been used to make fuzzy time series approaches more systematic and to improve forecasting performance. In addition, fuzzy clustering methods and artificial neural networks with different structures are used in the fuzzification of observations and in the determination of fuzzy relationships, respectively. In approaches that consider membership values, these values are either determined subjectively, or fuzzy outputs of the system are obtained by assuming a relation between membership values when identifying the fuzzy relationship; this necessitates a defuzzification step and increases the model error. In this study, membership values were obtained more systematically by using the Gustafson-Kessel fuzzy clustering technique. The use of an artificial neural network with a single multiplicative neuron model in the identification of the fuzzy relation eliminated the architecture selection problem as well as the need for a defuzzification step, by constituting target values from the real observations of the time series. The training of the artificial neural network with the single multiplicative neuron model, used in the identification of the fuzzy relation, is carried out with particle swarm optimization. The proposed method is implemented on various time series and the results are compared with those of previous studies to demonstrate its performance.
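
The forward pass of a single multiplicative neuron, the component the abstract uses to identify the fuzzy relation, is simple to state: the net input is a product rather than a sum of the weighted inputs. The weights and inputs below are illustrative; in the paper's scheme the inputs would be membership values from Gustafson-Kessel clustering:

```python
import math

def smn_output(inputs, weights, biases):
    """Single multiplicative neuron: net = prod(w_i * x_i + b_i),
    squashed by a sigmoid. One neuron handles any number of inputs,
    which is why there is no architecture-selection problem."""
    net = 1.0
    for x, w, b in zip(inputs, weights, biases):
        net *= w * x + b
    return 1.0 / (1.0 + math.exp(-net))

# Two lagged inputs in [0, 1] with illustrative weights: net = 0.25.
y = smn_output([0.5, 0.5], [1.0, 1.0], [0.0, 0.0])   # sigmoid(0.25)
```

In practice the weights and biases would be fitted, as in the paper, by particle swarm optimization against targets taken from the real observations.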

  4. Partition-based discrete-time quantum walks

    Science.gov (United States)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
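
One step of the standard coined walk, whose square drives the two-step coined model discussed above, can be sketched concretely. The graph (an 8-cycle) and the Hadamard coin are our choices for illustration; the paper's construction is far more general:

```python
import math

def coined_walk_step(amp, n):
    """One step of the coined discrete-time walk on an n-cycle: a
    Hadamard coin on the 2-dim coin space, then a coin-conditioned
    shift. amp maps (position, coin) to a real amplitude; coin 0 moves
    left, coin 1 moves right. The two-step coined model is the square
    of this operator (apply it twice)."""
    h = 1.0 / math.sqrt(2.0)
    new = {}
    for (x, c), a in amp.items():
        # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
        for c2, w in ((0, h), (1, h if c == 0 else -h)):
            x2 = (x - 1) % n if c2 == 0 else (x + 1) % n
            new[(x2, c2)] = new.get((x2, c2), 0.0) + w * a
    return new

state = {(0, 0): 1.0}            # walker at position 0, coin |0>
for _ in range(2):               # the two-step (squared) evolution
    state = coined_walk_step(state, 8)
norm = sum(a ** 2 for a in state.values())   # unitarity: norm stays 1
```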

  5. Explicit/multi-parametric model predictive control (MPC) of linear discrete-time systems by dynamic and multi-parametric programming

    KAUST Repository

    Kouramas, K.I.

    2011-08-01

    This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step. © 2011 Elsevier Ltd. All rights reserved.

  6. 10 Steps to Building an Architecture for Space Surveillance Projects

    Science.gov (United States)

    Gyorko, E.; Barnhart, E.; Gans, H.

    Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well defined, or well architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques, based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. Each step provides a guideline for the type of data to

  7. Implementation of the frequency dependent line model in a real-time power system simulator

    Directory of Open Access Journals (Sweden)

    Reynaldo Iracheta-Cortez

    2017-09-01

    This paper describes the implementation of the frequency-dependent line model (FD-Line) in a real-time digital power system simulator. The main goal of this development is to describe a general procedure for incorporating new realistic models of power system components into modern real-time simulators based on the Electromagnetic Transients Program (EMTP). The procedure first describes the steps to obtain the time-domain solution of the differential equations that model the electromagnetic behaviour of multi-phase transmission lines with frequency-dependent parameters. Then, the algorithmic solution of the FD-Line model is implemented in the Simulink environment, through an S-function programmed in C, for running off-line simulations of electromagnetic transients. This implementation allows the free assembly of the FD-Line model with any element of the Power System Blockset library, and it can be used to build any network topology. The main advantage of having a power network built in Simulink is that it can be executed in real time by means of the commercial eMEGAsim simulator. Finally, several simulation cases are presented to validate the accuracy and the real-time performance of the FD-Line model.

  8. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  9. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    Science.gov (United States)

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  10. The cc-bar and bb-bar spectroscopy in the two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaiserslautern Univ.

    1984-07-01

    We investigate the spectroscopy of the charmonium (cc-bar) and bottomonium (bb-bar) bound states in a static, flavour-independent, nonrelativistic quark-antiquark (qq-bar) two-step potential model proposed earlier. Our predictions are in good agreement with experimental data and with other theoretical predictions. (author)

  11. Secondary Special Education. Part I: The "Stepping Stone Model" Designed for Secondary Learning Disabled Students. Part II: Adapting Materials and Curriculum.

    Science.gov (United States)

    Fox, Barbara

    The paper describes the Stepping Stone Model, a model for the remediation and mainstreaming of secondary learning disabled students and the adaptation of curriculum and materials for the model. The Stepping Stone Model is designed to establish the independence of students in the mainstream through content reading. Five areas of concern common to…

  12. Model of cap-dependent translation initiation in sea urchin: a step towards the eukaryotic translation regulation network.

    Science.gov (United States)

    Bellé, Robert; Prigent, Sylvain; Siegel, Anne; Cormier, Patrick

    2010-03-01

    The large and rapid increase in the rate of protein synthesis following fertilization of the sea urchin egg has long been a paradigm of translational control, an important component of the regulation of gene expression in cells. This translational up-regulation is linked to physiological changes that occur upon fertilization and is necessary for entry into the first cell division cycle. Accumulated knowledge on cap-dependent initiation of translation makes it suitable and timely to start integrating the data into a system view of biological functions. Using a programming environment for systems biology coupled with model validation (named Biocham), we have built an integrative model for cap-dependent initiation of translation. The model is described by abstract rules. It contains 51 reactions involving 74 molecular complexes. The model proved to be coherent with existing knowledge by using queries based on computational tree logic (CTL) as well as Boolean simulations. The model could simulate the change in translation occurring at fertilization in the sea urchin model. It could also be coupled with an existing model designed for cell-cycle control. Therefore, the cap-dependent translation initiation model can be considered a first step towards the eukaryotic translation regulation network.
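    The Boolean-simulation idea mentioned above can be illustrated with a toy abstraction of the cap-dependent initiation switch: fertilization triggers loss of the 4E-BP repressor, freeing eIF4E to form the cap-binding complex and activate translation. The rules below are a drastic simplification for illustration only, not the 51-reaction Biocham model.

```python
# Toy Boolean network: a few hand-written rules standing in for the
# full cap-dependent initiation model. Species names are illustrative.
def step(state):
    s = dict(state)
    s["4EBP"] = state["4EBP"] and not state["fertilized"]  # lost on fertilization
    s["eIF4E_free"] = not s["4EBP"]        # eIF4E released from its repressor
    s["eIF4F"] = s["eIF4E_free"]           # cap-binding complex with eIF4G
    s["translation"] = s["eIF4F"]
    return s

def simulate(state, n=4):
    """Iterate the update rules until the (small) network stabilizes."""
    for _ in range(n):
        state = step(state)
    return state

start = {"4EBP": True, "eIF4E_free": False, "eIF4F": False, "translation": False}
before = simulate(dict(start, fertilized=False))
after = simulate(dict(start, fertilized=True))
```

    A tool like Biocham additionally checks such models against temporal-logic (CTL) queries rather than just running them forward.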

  13. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options.
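    The trade-off such time-step selection addresses can be sketched with a deliberately simple integrator: shrink the step when the amplitude changes too fast, grow it when the solution is smooth. The sketch below applies this to one-group point kinetics with explicit Euler; it is a hedged illustration of the general idea, not the DIF3D-K algorithm, and all parameter values are assumed.

```python
# Naive adaptive time stepping for one-group point kinetics (explicit Euler).
# rho: reactivity, beta: delayed fraction, Lam: generation time, lam: decay const.
def point_kinetics_adaptive(rho, beta, Lam, lam, t_end, tol=0.05, dt0=1e-4):
    n = 1.0                      # neutron amplitude
    C = beta / (Lam * lam)       # precursors at equilibrium with n = 1
    t, dt, steps = 0.0, dt0, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        dn = ((rho - beta) / Lam * n + lam * C) * dt
        dC = (beta / Lam * n - lam * C) * dt
        if abs(dn) / n > tol:    # amplitude changing too fast for this dt
            dt *= 0.5
            continue             # retry the step with a smaller dt
        n, C, t = n + dn, C + dC, t + dt
        steps += 1
        if abs(dn) / n < 0.1 * tol:
            dt *= 1.2            # solution smooth: allow a larger step
    return n, steps
```

    Production codes base the criterion on more robust error estimates (and, for factorization methods, on shape-change measures), but the shrink/grow control loop has the same structure.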

  14. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into
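    The expected-sample-size savings reported above can be illustrated with a deliberately simple simulation: a two-arm trial with one interim look that stops early for efficacy. This is a hedged toy example, not the paper's stepped-wedge methodology; the effect size and the boundaries (2.5 at the interim, 1.96 at the final analysis) are assumed rather than derived from error spending.

```python
# Toy group sequential trial: one interim analysis, stop early for efficacy.
import random

random.seed(11)

def z_stat(a, b):
    """Z statistic for a difference in means with known unit variance."""
    n = len(a)
    return (sum(a) / n - sum(b) / n) / (2.0 / n) ** 0.5

def one_trial(effect=0.5, n_stage=50, c_interim=2.5, c_final=1.96):
    a = [random.gauss(effect, 1) for _ in range(n_stage)]
    b = [random.gauss(0.0, 1) for _ in range(n_stage)]
    if z_stat(a, b) > c_interim:              # early stop for efficacy
        return 2 * n_stage, True
    a += [random.gauss(effect, 1) for _ in range(n_stage)]
    b += [random.gauss(0.0, 1) for _ in range(n_stage)]
    return 4 * n_stage, z_stat(a, b) > c_final

results = [one_trial() for _ in range(500)]
expected_n = sum(n for n, _ in results) / 500  # fixed design would need 200
power = sum(r for _, r in results) / 500
```

    Real designs choose the boundaries via an error-spending function so that the overall type-I error rate is preserved; here the point is only that early stopping lowers the expected sample size under the alternative.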

  15. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using an unsteady, Reynolds-averaged Navier-Stokes equation solver based on explicit finite differences, Runge-Kutta multistage time marching, and the diagonalized alternating-direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.
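    The optimizer setup described above (a genetic algorithm with constraints folded into the objective via a penalty term) can be sketched in a few lines. The objective and constraint below are made-up stand-ins for the pressure-loss objective and the fixed mass-flow constraint, not the paper's CFD-based functions.

```python
# Toy genetic algorithm with a penalty method for constraint handling.
import random

random.seed(1)

def objective(x):             # stand-in for total pressure loss; minimum at (0.3, -0.2)
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2

def constraint_violation(x):  # stand-in for a design constraint g(x) <= 0
    return max(0.0, x[0] + x[1] - 0.5)

def fitness(x, penalty=100.0):
    # penalty method: infeasible designs are charged a quadratic penalty
    return objective(x) + penalty * constraint_violation(x) ** 2

def ga(pop_size=40, gens=60, bounds=(-1.0, 1.0)):
    pop = [[random.uniform(*bounds) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # minimization
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)    # midpoint crossover + mutation
            children.append([(ai + bi) / 2 + random.gauss(0, 0.05)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

    In the paper each fitness evaluation is a full unsteady CFD run, which is why the population was evaluated in parallel on a 32-processor machine.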

  16. Randomness in multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    The authors propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that, to obtain a statistical theory, two physically different approaches to postulating randomness are conceivable, called leading-particle statistics and residual-system statistics respectively. They present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models, such as the generalized exciton model, can be interpreted as further phenomenological simplifications of the leading-particle statistics theory.

  17. Impact of first-step potential and time on the vertical growth of ZnO nanorods on ITO substrate by two-step electrochemical deposition

    International Nuclear Information System (INIS)

    Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae

    2013-01-01

    Highlights: •We grew vertical ZnO nanorods on ITO substrate using a two-step continuous potential process. •The nucleation for the ZnO nanorods growth was changed by first-step potential and duration. •The vertical ZnO nanorods were well grown when first-step potential was −1.2 V and 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium doped tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s, and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest PL near band edge emission to deep level emission peak intensity ratio (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration.

  18. Deformation dependent TUL multi-step direct model

    International Nuclear Information System (INIS)

    Wienke, H.; Capote, R.; Herman, M.; Sin, M.

    2008-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and JEFF-3.1 and JENDL-3.3 evaluations. (authors)

  19. Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED).

    Science.gov (United States)

    Fontaine, Reid Griffith; Dodge, Kenneth A

    2006-11-01

    Considerable scientific and intervention attention has been paid to judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decision making to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths, with mathematical representations that may be used to quantify response strength. These components are a heuristic to describe decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions.

  20. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    Science.gov (United States)

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For two-step transesterification biodiesel production from sunflower oil, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors are determined quantitatively, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product. Taking the transesterification intermediate product into consideration, both the traditional distillation separation process and the improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process of the two-step reaction product has a distinct advantage in energy duty and equipment requirements due to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits found especially in regulated public administration processes. First, we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.

  2. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    Science.gov (United States)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with 2 grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency.
Our results highlight the important interactions between external hydrological forcing and
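    The reduced-complexity, jamming-based approach described above can be sketched as a one-dimensional lattice game: grains hop downstream with an entrainment probability that drops where an upstream neighbour is occupied, so jammed clusters (steps) persist. The rules and parameters below are invented for illustration and are not those of CAST2.

```python
# Minimal 1-D sketch of particle jamming stabilizing grain clusters.
import random

random.seed(42)

def simulate(n_cells=100, n_moves=2000, p_entrain=0.5, jam_factor=0.1):
    """Random-sequential updates; grains accumulate rather than leave the
    right edge (a short, illustrative run)."""
    grid = [random.random() < 0.3 for _ in range(n_cells)]
    for _ in range(n_moves):
        i = random.randrange(n_cells - 1)
        if grid[i] and not grid[i + 1]:
            # jamming: an occupied upstream neighbour lowers entrainment
            p = p_entrain * (jam_factor if (i > 0 and grid[i - 1]) else 1.0)
            if random.random() < p:
                grid[i], grid[i + 1] = False, True   # hop downstream
    return grid

def step_density(grid):
    """Fraction of cells that start a contiguous occupied cluster."""
    return sum(1 for i in range(1, len(grid))
               if grid[i] and not grid[i - 1]) / len(grid)
```

    Even this caricature shows the mechanism of interest: with a strong jamming factor, clusters that form by chance are rarely dismantled, so a step-like pattern survives repeated "flood" sweeps.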

  3. Effect of increased exposure times on amount of residual monomer released from single-step self-etch adhesives.

    Science.gov (United States)

    Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet

    2015-10-16

    The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to a bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of residual monomers released from the adhesives was observed at the 10th minute. There were statistically significant differences in released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times according to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.

  4. Preimages for Step-Reduced SHA-2

    DEFF Research Database (Denmark)

    Aoki, Kazumaro; Guo, Jian; Matusiewicz, Krystian

    2009-01-01

    In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2^251.9, 2^509 for finding pseudo-preimages and 2^254.9, 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384...

  5. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with
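    The defining construction can be illustrated by simulation: take the simplest Levy process (Brownian motion with drift) and let each unit draw its own threshold, so the observed durations are first passage times mixed over the threshold distribution. All parameter values below are illustrative assumptions, not estimates from the paper.

```python
# Simulated durations from a mixed hitting-time model:
# first passage of Brownian motion with drift over a unit-specific threshold.
import random

random.seed(0)

def hitting_time(threshold, drift=1.0, sigma=0.5, dt=0.01, t_max=100.0):
    """Euler simulation of the first time the process exceeds the threshold."""
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += drift * dt + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

# heterogeneity: each unit has its own (here uniformly drawn) threshold
durations = [hitting_time(random.uniform(0.5, 2.0)) for _ in range(200)]
mean_duration = sum(durations) / len(durations)
```

    For Brownian motion with positive drift the expected hitting time is threshold/drift, so mixing over thresholds directly induces duration heterogeneity.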

  6. Composition of single-step media used for human embryo culture.

    Science.gov (United States)

    Morbeck, Dean E; Baumann, Nikola A; Oglesbee, Devin

    2017-04-01

    To determine compositions of commercial single-step culture media and to test with a murine model whether differences in composition are biologically relevant. Experimental laboratory study. University-based laboratory. Inbred female mice were superovulated and mated with outbred male mice. Amino acid, organic acid, and ion content were determined for the single-step culture media CSC, Global, G-TL, and 1-Step. To determine whether differences in the composition of these media are biologically relevant, mouse one-cell embryos were cultured for 96 hours in each culture medium at 5% and 20% oxygen in a time-lapse incubator. The compositions of the four culture media were analyzed for concentrations of 30 amino acids, organic acids, and ions. Blastocyst development at 96 hours of culture and cell-cycle timings were calculated, and experiments were repeated in triplicate. Of the more than 30 analytes, the concentrations of glucose, lactate, pyruvate, amino acids, phosphate, calcium, and magnesium varied among media. Mouse embryos were differentially affected by oxygen in G-TL and 1-Step. The four single-step culture media have compositions that vary notably in pyruvate, lactate, and amino acids. Blastocyst development was affected by culture media and its interaction with oxygen concentration. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  7. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  8. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables in function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c and the dried product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for the evaluation of the impact of different process variables on the model outcome, leading to essential process knowledge, without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
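    A regression-based GSA of the kind mentioned above ranks inputs by their standardized regression coefficients (SRCs). The sketch below applies this to a made-up linear stand-in for the drying model, with inputs playing the roles of T_s, P_c and α_Rp; the "model", its coefficients, and its noise level are illustrative assumptions, not the paper's validated mechanistic model.

```python
# Regression-based GSA: rank inputs by |standardized regression coefficient|.
import random
import statistics

random.seed(3)

def model(ts, pc, rp):
    """Illustrative stand-in response (e.g. a sublimation rate surrogate)."""
    return 2.0 * ts - 0.5 * pc - 1.0 * rp + random.gauss(0, 0.1)

N = 2000
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(N)]  # sampled inputs
y = [model(*x) for x in X]

def src(j):
    """SRC of input j; for independent inputs this is corr(x_j, y)."""
    xj = [x[j] for x in X]
    mx, my = statistics.mean(xj), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(xj, y)) / (N - 1)
    return cov / (statistics.stdev(xj) * statistics.stdev(y))

ranking = sorted(range(3), key=lambda j: -abs(src(j)))  # most influential first
```

    Variance-based methods (Sobol indices) drop the linearity assumption at the cost of more model evaluations, which is why both flavours are often run together.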

  9. Microsoft Office SharePoint Designer 2007 Step by Step

    CERN Document Server

    Coventry, Penelope

    2008-01-01

    The smart way to learn Office SharePoint Designer 2007-one step at a time! Work at your own pace through the easy numbered steps, practice files on CD, helpful hints, and troubleshooting tips to master the fundamentals of building customized SharePoint sites and applications. You'll learn how to work with Windows® SharePoint Services 3.0 and Office SharePoint Server 2007 to create Web pages complete with Cascading Style Sheets, Lists, Libraries, and customized Web parts. Then, make your site really work for you by adding data sources, including databases, XML data and Web services, and RSS fe

  10. DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL

    International Nuclear Information System (INIS)

    WIENKE, H.; CAPOTE, R.; HERMAN, M.; SIN, M.

    2007-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations.

  11. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    Science.gov (United States)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the surviving species.
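    The direct Monte Carlo validation mentioned above amounts to simulating the stochastic model many times and recording when a population first drops below an extinction threshold. The sketch below does this for a Lotka-Volterra-type system driven by multiplicative white noise via Euler-Maruyama; the model form, parameters, and threshold are illustrative assumptions, and the GCM/STGA machinery itself is not reproduced.

```python
# Direct Monte Carlo estimate of extinction-time statistics.
import random

random.seed(7)

def extinction_time(x0=0.5, y0=0.5, a=1.0, b=1.5, c=1.0, d=1.0,
                    noise=0.3, dt=0.01, t_max=50.0, eps=1e-3):
    """First time either population falls below eps (censored at t_max)."""
    x, y, t = x0, y0, 0.0
    while t < t_max:
        if x < eps or y < eps:
            return t                              # extinction observed
        dW1 = random.gauss(0.0, 1.0) * dt ** 0.5  # Wiener increments
        dW2 = random.gauss(0.0, 1.0) * dt ** 0.5
        x += (a * x - b * x * y) * dt + noise * x * dW1
        y += (c * x * y - d * y) * dt + noise * y * dW2
        x, y = max(x, 0.0), max(y, 0.0)           # populations stay nonnegative
        t += dt
    return t_max                                  # censored: no extinction seen

times = [extinction_time() for _ in range(50)]
mean_T = sum(times) / len(times)
```

    The appeal of the GCM/STGA approach is precisely that it replaces many such sample paths with one-step transition probabilities computed on a cell partition of the state space.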

  12. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    Science.gov (United States)

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

    The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum or difference of the motors' step counts. The valve design and the principle of its operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, the hydraulic valve and the cylinder. The main features of the valve and drive operation are described; some specific problem areas concerning the nature of stepping motors and their differential work in the valve are also considered. The whole non-linear servo drive model is proposed and used for simulation investigations. The initial simulation investigations of the drive with the new valve showed a significant overshoot in the drive step response, which is not acceptable in a positioning process. Therefore, additional effort was spent to reduce the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. Finally, the model is implemented in a real system and the whole servo drive is tested. The investigation results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.
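    The core of any overshoot-free predictive positioning scheme is to limit the commanded speed by the distance still needed to stop. As a hedged sketch of that idea (the dynamics and parameters below are crude assumptions, not the paper's electrohydraulic servo model), the commanded velocity never exceeds the deceleration limit curve sqrt(2*a*err), so the predicted stopping point cannot pass the target.

```python
# Predictive velocity limiting: no overshoot by construction.
def move_to(target, v_max=10.0, accel=200.0, dt=0.001):
    pos, vel, t = 0.0, 0.0, 0.0
    peak = pos
    while target - pos > 1e-3 and t < 10.0:
        err = target - pos
        # fastest speed from which we can still stop within err, capped by
        # the acceleration limit and the maximum speed
        vel = min(v_max, vel + accel * dt, (2.0 * accel * err) ** 0.5)
        pos += vel * dt
        peak = max(peak, pos)
        t += dt
    return pos, peak

final, peak = move_to(5.0)
```

    A naive controller that brakes only when pos + vel^2/(2*a) reaches the target can still overshoot by roughly vel*dt per step; limiting the velocity itself, as above, removes that discretization overshoot.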

  13. Probabilistic forecasting of wind power at the minute time-scale with Markov-switching autoregressive models

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2008-01-01

    Better modelling and forecasting of very short-term power fluctuations at large offshore wind farms may significantly enhance control and management strategies of their power output. The paper introduces a new methodology for modelling and forecasting such very short-term fluctuations. The proposed methodology is based on a Markov-switching autoregressive model with time-varying coefficients. An advantage of the method is that one can easily derive full predictive densities. The quality of this methodology is demonstrated on the test case of 2 large offshore wind farms in Denmark, in a 1-step-ahead forecasting exercise on time series of wind generation with a time resolution of 10 minutes. The quality of the introduced forecasting methodology and its interest for better understanding power fluctuations are finally discussed.
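    A one-step forecast from a Markov-switching AR model mixes the per-regime conditional means, weighted by the regime probabilities propagated through the transition matrix. The sketch below does this for a 2-regime MS-AR(1) with fixed, made-up parameters; the paper's method additionally makes the coefficients time-varying and estimates them from data.

```python
# One-step point forecast from a 2-regime Markov-switching AR(1) model.
# Regime 0 = calm, regime 1 = fluctuating. All parameters are assumed.
phi = [0.9, 0.5]            # AR(1) coefficient per regime
mu = [0.2, 0.6]             # mean level per regime
P = [[0.95, 0.05],          # regime transition probabilities P[i][j]
     [0.10, 0.90]]

def one_step_forecast(y_t, prob_t):
    """Point forecast of y_{t+1} given the current value and the current
    regime probabilities prob_t = [P(regime 0), P(regime 1)]."""
    # propagate the regime probabilities one step ahead
    prob_next = [sum(prob_t[i] * P[i][j] for i in range(2)) for j in range(2)]
    # per-regime conditional means, then mix them
    cond_mean = [mu[j] + phi[j] * (y_t - mu[j]) for j in range(2)]
    return sum(p * m for p, m in zip(prob_next, cond_mean)), prob_next

forecast, prob_next = one_step_forecast(0.4, [0.8, 0.2])
```

    Because the forecast is a mixture over regimes, the same machinery also yields a full predictive density (a mixture of the per-regime densities) rather than only a point forecast.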

  14. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    Full Text Available The final procedures of orthodontic treatment are bracket debonding and cleaning of the remaining adhesive. The multi-step polishing system is the most common method used. The disadvantage of that system is the long working time required by its several stages. Dental material manufacturers have therefore improved the system, reducing the several stages to a single one; this new system is known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted on twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar Vivadent) and multi-step AstroPol (Ivoclar Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the respective polishing system for ninety seconds using a low-speed handpiece. Enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p>0.005). Conclusion: The one-step polishing system produces enamel roughness similar to the multi-step polishing system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  15. Developing a framework to model the primary drying step of a continuous freeze-drying process based on infrared radiation

    DEFF Research Database (Denmark)

    Van Bockstal, Pieter-Jan; Corver, Jos; Mortier, Séverine Thérèse F.C.

    2018-01-01

    …requires the fundamental mechanistic modelling of each individual process step. Therefore, a framework is presented for the modelling and control of the continuous primary drying step based on non-contact IR radiation. The IR radiation emitted by the radiator filaments passes through various materials… These results assist in the selection of proper materials which could serve as the IR window in the continuous freeze-drying prototype. The modelling framework presented in this paper fits the model-based design approach used for the development of this prototype and shows the potential benefits of this design…

  16. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
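    The quantification arithmetic in Steps 2-4 can be sketched as follows (a simplified illustration of the formulas published in NUREG/CR-6883; the PSF multipliers passed in are placeholders, and the worksheets, not this sketch, are authoritative):

    ```python
    # Simplified sketch of SPAR-H quantification (per NUREG/CR-6883).
    # Input PSF multipliers are illustrative, not recommendations.

    def spar_h_hep(psf_multipliers, diagnosis=True):
        """Steps 2-3: PSF-modified HEP, including the adjustment factor that
        keeps the result below 1.0 when 3 or more PSFs are negative."""
        nhep = 0.01 if diagnosis else 0.001        # nominal HEP
        composite = 1.0
        for m in psf_multipliers:
            composite *= m
        negative = sum(1 for m in psf_multipliers if m > 1)
        if negative >= 3:                          # adjustment factor
            return nhep * composite / (nhep * (composite - 1) + 1)
        return min(nhep * composite, 1.0)

    def with_dependence(hep, level):
        """Step 4: condition the HEP on a preceding failure."""
        formulas = {"zero": hep,
                    "low": (1 + 19 * hep) / 20,
                    "moderate": (1 + 6 * hep) / 7,
                    "high": (1 + hep) / 2,
                    "complete": 1.0}
        return formulas[level]

    hep = spar_h_hep([10, 10, 5], diagnosis=True)  # three negative PSFs
    print(round(hep, 3), round(with_dependence(hep, "low"), 3))
    ```

    Note how the adjustment factor matters: a naive product here would give 0.01 × 500 = 5, whereas the adjusted HEP remains a valid probability.
    
    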

  17. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  18. Performance of a Predictive Model for Calculating Ascent Time to a Target Temperature

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2016-12-01

    Full Text Available The aim of this study was to develop an artificial neural network (ANN) prediction model for controlling building heating systems. This model was used to calculate the ascent time of the indoor temperature from the setback period (when a building was not occupied) to a target setpoint temperature (when a building was occupied). The calculated ascent time was applied to determine the proper moment to start raising the temperature from the setback temperature so as to reach the target temperature at the appropriate time. Three major steps were conducted: (1) model development; (2) model optimization; and (3) performance evaluation. Two software programs, Matrix Laboratory (MATLAB) and Transient Systems Simulation (TRNSYS), were used for model development, performance tests, and numerical simulation methods. Correlation analysis between the input variables and the output variable of the ANN model revealed that two input variables (the current indoor air temperature and the temperature difference from the target setpoint temperature) presented relatively strong relationships with the ascent time to the target setpoint temperature. These two variables were used as input neurons. Analyzing the difference between the simulated and predicted values from the ANN model provided the optimal number of hidden neurons (9), hidden layers (3), momentum (0.9), and learning rate (0.9). At the study's conclusion, the optimized model proved its prediction accuracy with acceptable errors.
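    The model structure can be sketched in plain numpy (an illustrative sketch, not the paper's MATLAB model): the two selected inputs predict the ascent time, and the weight update uses the reported momentum of 0.9. The toy data, single hidden layer, learning rate, and epoch count are assumptions for the sake of a small runnable example.

    ```python
    import numpy as np

    # Toy ANN: (indoor temperature, gap to setpoint) -> ascent time (min),
    # trained by gradient descent with momentum 0.9.
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(12, 20, 200),    # indoor temp (deg C)
                         rng.uniform(0.5, 10, 200)])  # gap to setpoint (deg C)
    y = (6.0 * X[:, 1] + 0.1 * X[:, 0])[:, None]      # synthetic ascent time
    Xn = (X - X.mean(0)) / X.std(0)                   # normalise inputs

    W1, b1 = rng.normal(0, 0.5, (2, 9)), np.zeros(9)
    W2, b2 = rng.normal(0, 0.5, (9, 1)), np.zeros(1)
    vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
    lr, mom = 1e-2, 0.9                               # momentum as reported

    losses = []
    for epoch in range(500):
        h = np.tanh(Xn @ W1 + b1)                     # hidden layer, 9 neurons
        err = (h @ W2 + b2) - y
        losses.append(float((err ** 2).mean()))
        gW2, gb2 = h.T @ err / len(Xn), err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)              # backprop through tanh
        gW1, gb1 = Xn.T @ dh / len(Xn), dh.mean(0)
        for p, g, v in zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2), vel):
            v *= mom
            v -= lr * g
            p += v                                    # momentum update
    ```

    The momentum term accumulates past gradients, which is what the reported value of 0.9 controls in the paper's training as well.
    
    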

  19. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.

  20. Effect of the processing steps on compositions of table olive since harvesting time to pasteurization.

    Science.gov (United States)

    Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A

    2013-08-01

    Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed that: oil percentage was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); in all cultivars, oleic, palmitic, linoleic, and stearic acids predominated throughout the processing steps; the greatest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and lowest ratios of ω3/ω6 were in Ascolana (washing step) and Zard (pasteurization step), respectively; tocopherol was highest in Amigdalolia and lowest in Fishomi, with the major damage occurring in the lye step; polyphenols were highest in Ascolana (crude step) and lowest in Zard and Ascolana (pasteurization step); the major polyphenol damage among cultivars occurred during the lye step, in which the polyphenol content was reduced to one tenth of its initial value; sterols did not change during the processing steps. A review of olive patents shows that many fruit constituents, such as oil quality and quantity and the fatty acid fraction, can be changed by altering the cultivar and the process.

  1. Coconut Model for Learning First Steps of Craniotomy Techniques and Cerebrospinal Fluid Leak Avoidance.

    Science.gov (United States)

    Drummond-Braga, Bernardo; Peleja, Sebastião Berquó; Macedo, Guaracy; Drummond, Carlos Roberto S A; Costa, Pollyana H V; Garcia-Zapata, Marco T; Oliveira, Marcelo Magaldi

    2016-12-01

    Neurosurgery simulation has gained attention recently due to changes in the medical system. First-year neurosurgical residents in low-income countries usually perform their first craniotomy on a real subject. Development of high-fidelity, cheap, and widely available simulators is a challenge in residency training. An original model for the first steps of craniotomy, with practice in cerebrospinal fluid leak avoidance, using a coconut is described. The coconut is a drupe from Cocos nucifera L. (the coconut tree). The green coconut has 4 layers, and some similarity can be seen between these layers and the human skull. The materials used in the simulation are the same as those used in the operating room. The coconut is placed face up on the head holder support. The burr holes are made until the endocarp is reached. The mesocarp is dissected, and the conductor is passed from one hole to the other with the Gigli saw. The hook handle for the wire saw is positioned, and the mesocarp and endocarp are cut. After sawing the 4 margins, the mesocarp is detached from the endocarp. Four burr holes are made from the endocarp to the endosperm. Careful dissection of the endosperm is done, avoiding liquid albumen leak. The Gigli saw is passed through the trephine holes. Hooks are placed, and the endocarp is cut. After cutting the 4 margins, it is dissected from the endosperm and removed. The main goal of the procedure is to remove the endocarp without fluid leakage. The coconut model for learning the first steps of craniotomy and cerebrospinal fluid leak avoidance has some limitations. It is most realistic when trying to remove the endocarp without damage to the endosperm. It is also cheap and can be widely used in low-income countries. However, the coconut does not have anatomic landmarks, and the mesocarp makes the model less realistic because it has fibers that make the procedure more difficult and different from a real craniotomy. The model has a potential pedagogic neurosurgical application for

  2. Estimation of time-varying pollutant emission rates in a ventilated enclosure: inversion of a reduced model obtained by experimental application of the modal identification method

    International Nuclear Information System (INIS)

    Girault, M; Maillet, D; Bonthoux, F; Galland, B; Martin, P; Braconnier, R; Fontaine, J R

    2008-01-01

    A method is proposed for estimating time-varying emission rates of pollutant sources in a ventilated enclosure through the resolution of an inverse forced convection problem. Unsteady transport–diffusion of the pollutant is considered, with the assumption of a stationary velocity field remaining unchanged during emission (passive contaminant). The pollutant transport equation is therefore linear with respect to concentration. The sources' locations are also supposed to be known. In the first step, a reduced model (RM) linking concentrations at a set of control points to the emission rates of the sources is identified from experimental data using the modal identification method (MIM). This parameter estimation problem uses transient contaminant concentration measurements made at control points inside the ventilated enclosure, corresponding to increasing and decreasing steps of emission rates. Such experimental modelling avoids dealing with a CFD code involving turbulence modelling and removes uncertainties about sensor positions. In the second step, the identified RM is used to solve an inverse forced convection problem: from contaminant concentrations measured at the same control points, the rates of sources emitting simultaneously are estimated with a sequential-in-time algorithm using future time steps.
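    The second (inversion) step can be sketched on a one-state reduced model (an illustrative sketch, not the authors' RM or code; the model coefficients, noise level, and emission profile are invented): the emission rate is assumed constant over a few future time steps and recovered by least squares, which is the classic way future steps stabilise a sequential estimate.

    ```python
    import numpy as np

    # Toy reduced model c[t+1] = a*c[t] + b*q[t]: recover a time-varying
    # emission rate q from noisy concentrations, using r future time steps.
    rng = np.random.default_rng(3)
    a, b, n, r = 0.9, 0.5, 200, 5        # RM coefficients, length, future steps

    q_true = np.where((np.arange(n) > 50) & (np.arange(n) < 130), 2.0, 0.0)
    c = np.zeros(n + 1)
    for t in range(n):
        c[t + 1] = a * c[t] + b * q_true[t]
    meas = c + rng.normal(0, 0.02, n + 1)  # noisy measurements

    q_est = np.zeros(n)
    for t in range(n - r):
        # assume q constant over the next r steps; least squares on residuals
        num = den = 0.0
        pred, sens = meas[t], 0.0          # zero-input response, d(pred)/dq
        for k in range(r):
            pred = a * pred
            sens = a * sens + b
            num += sens * (meas[t + k + 1] - pred)
            den += sens * sens
        q_est[t] = num / den

    err = np.abs(q_est[:n - r] - q_true[:n - r]).mean()
    ```

    The estimate tracks the rectangular emission profile closely except for a short smearing of r samples around each jump, the usual price of using future time steps.
    
    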

  3. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  4. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  5. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    Full Text Available During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone and used its output to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial by the clusters' underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of the proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable on either ethical or logistical grounds.

  6. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  7. Detection and Correction of Step Discontinuities in Kepler Flux Time Series

    Science.gov (United States)

    Kolodziejczak, J. J.; Morris, R. L.

    2011-01-01

    PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
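    A minimal version of step detection and correction can be sketched as follows (a toy illustration, not the PDC 8.0 algorithm; the window width, noise level, and dropout magnitude are invented): locate the sample where the difference between leading and trailing window medians is most negative, then re-level the tail.

    ```python
    import numpy as np

    # Toy SPSD-style detection: find a -0.5% step in a synthetic flux time
    # series by a sliding median-difference statistic, then correct it.
    rng = np.random.default_rng(2)
    flux = 1.0 + rng.normal(0, 1e-4, 1000)
    flux[600:] -= 0.005                      # -0.5% sensitivity dropout

    w = 25                                   # window width (samples)
    score = np.array([np.median(flux[i:i + w]) - np.median(flux[i - w:i])
                      for i in range(w, len(flux) - w)])
    step_at = int(np.argmin(score)) + w      # most negative median jump
    offset = (np.median(flux[step_at:step_at + w])
              - np.median(flux[step_at - w:step_at]))
    corrected = flux.copy()
    corrected[step_at:] -= offset            # re-level the post-step segment
    ```

    Medians make the statistic robust to outliers; a production version would also model the partial exponential recovery mentioned above, which this sketch ignores.
    
    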

  8. A step-by-step methodology for enterprise interoperability projects

    Science.gov (United States)

    Chalmeta, Ricardo; Pazos, Verónica

    2015-05-01

    Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to help achieve enterprise interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling, Architecture and Platform, and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, such as business, process, human resources, technology, knowledge and semantics.

  9. First-time whole blood donation: A critical step for donor safety and retention on first three donations.

    Science.gov (United States)

    Gillet, P; Rapaille, A; Benoît, A; Ceinos, M; Bertrand, O; de Bouyalsky, I; Govaerts, B; Lambermont, M

    2015-01-01

    Whole blood donation is generally safe, although vasovagal reactions (VVRs) can occur (in approximately 1% of donations). The risk factors are well known, and prevention measures have been shown to be efficient. This study evaluates the impact on donor retention of the occurrence of a vasovagal reaction during the first three blood donations. Our analysis of data collected over three years evaluated the impact of the classical risk factors and provided a model including the best combination of covariates predicting VVR. The impact of a reaction at first donation on the return rate and on complications up to the third donation was evaluated. Our data (523,471 donations) confirmed the classical risk factors (gender, age, donor status and relative blood volume). After stepwise variable selection, donor status, relative blood volume and their interaction were the only remaining covariates in the model. Of 33,279 first-time donors monitored over a period of at least 15 months, the first three donations were followed. The data emphasised the impact of a complication at first donation: the return rate for a second donation was reduced and the risk of vasovagal reaction was increased at least until the third donation. First-time donation is a crucial step in a donor's career. Donors who experienced a reaction at their first donation have a lower return rate for a second donation and a higher risk of vasovagal reaction at least until the third donation. Prevention measures have to be implemented to improve donor retention and provide blood banks with an adequate blood supply. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  10. A Step Forward to Closing the Loop between Static and Dynamic Reservoir Modeling

    Directory of Open Access Journals (Sweden)

    Cancelliere M.

    2014-12-01

    Full Text Available The current trend in history matching is to find multiple calibrated models instead of a single set of model parameters that matches the historical data. The advantage of several current workflows involving assisted history matching techniques, particularly those based on heuristic optimizers or direct search, is that they lead to a number of calibrated models that partially address the problem of the non-uniqueness of the solutions. The importance of achieving multiple solutions is that calibrated models can be used for a true quantification of the uncertainty affecting the production forecasts, which represent the basis for technical and economic risk analysis. In this paper, the importance of incorporating geological uncertainties in a reservoir study is demonstrated. A workflow is presented which includes, in the calibration of the numerical dynamic models and consequently in the production forecast, the analysis of the uncertainty associated with the facies distribution for a fluvial depositional environment. The first step in the workflow was to generate a set of facies realizations starting from different conceptual models. After facies modeling, the petrophysical properties were assigned to the simulation domains. Then, each facies realization was calibrated separately by varying the permeability and porosity fields. Data assimilation techniques were used to calibrate the models in a reasonable span of time. Results showed that even the adoption of a conceptual model for facies distribution clearly representative of the reservoir's internal geometry might not guarantee reliable results in terms of production forecast. Furthermore, realizations which seemed fully acceptable after calibration were not representative of the true reservoir internal configuration and provided wrong production forecasts, whereas realizations which did not show a good fit of the production data could reliably predict the reservoir

  11. Reconstruction of networks from one-step data by matching positions

    Science.gov (United States)

    Wu, Jianshe; Dang, Ni; Jiao, Yang

    2018-05-01

    Estimating the topology of a network from short time series data is a challenge. In this paper, a matching-positions method is developed to reconstruct the topology of a network from only one-step data. We consider a general network model of coupled agents in which the phase transformation of each node is determined by its neighbors. From the phase transformation information from one step to the next, the connections of the tail vertices are reconstructed first by matching positions. By removing the already reconstructed vertices and repeatedly reconstructing the connections of the tail vertices, the topology of the entire network is obtained. For sparse scale-free networks with more than ten thousand nodes, simulations recover almost the exact topology using only the one-step data.

  12. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    Energy Technology Data Exchange (ETDEWEB)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    2016-02-01

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a point reactor kinetics equation (PRKE) solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases will be applied to further test the method and its implementation.
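    The two-time-scale structure of IQS can be sketched schematically (an illustrative sketch, not the RATTLESNAKE implementation; the kinetics parameters and reactivity ramp are invented, and the shape solve is reduced to a stub): the amplitude advances with a one-delayed-group PRKE on micro steps, while the shape would be recomputed only on macro steps.

    ```python
    # Schematic IQS loop: PRKE amplitude on micro steps, shape on macro steps.
    beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay, gen. time

    def prke_step(p, c, rho, dt):
        """Explicit Euler micro step of the one-delayed-group PRKE."""
        dp = ((rho - beta) / Lam) * p + lam * c
        dc = (beta / Lam) * p - lam * c
        return p + dt * dp, c + dt * dc

    p, c = 1.0, beta / (lam * Lam)        # critical steady state
    macro_dt, micro_dt = 0.01, 1e-5       # macro vs micro time steps
    for macro in range(10):               # 0.1 s transient
        # a full IQS code would recompute the flux shape here (macro step)
        t0 = macro * macro_dt
        for k in range(int(macro_dt / micro_dt)):
            rho = beta * (t0 + k * micro_dt)  # ramp to +0.1*beta at 0.1 s
            p, c = prke_step(p, c, rho, micro_dt)

    print(f"amplitude after ramp: {p:.3f}")
    ```

    The efficiency argument is visible in the loop structure: the expensive spatial solve runs 10 times, while the cheap scalar PRKE runs 10,000 times.
    
    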

  13. Enabling Real-time Water Decision Support Services Using Model as a Service

    Science.gov (United States)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through the application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basins in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow and provide real-time water allocation decision support services.

  14. From atoms to steps: The microscopic origins of crystal evolution

    Science.gov (United States)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2014-07-01

    The Burton-Cabrera-Frank (BCF) theory of crystal growth has been successful in describing a wide range of phenomena in surface physics. Typical crystal surfaces are slightly misoriented with respect to a facet plane; thus, the BCF theory views such systems as composed of staircase-like structures of steps separating terraces. Adsorbed atoms (adatoms), which are represented by a continuous density, diffuse on terraces, and steps move by absorbing or emitting these adatoms. Here we shed light on the microscopic origins of the BCF theory by deriving a simple, one-dimensional (1D) version of the theory from an atomistic, kinetic restricted solid-on-solid (KRSOS) model without external material deposition. We define the time-dependent adatom density and step position as appropriate ensemble averages in the KRSOS model, thereby exposing the non-equilibrium statistical mechanics origins of the BCF theory. Our analysis reveals that the BCF theory is valid in a low adatom-density regime, much in the same way that an ideal gas approximation applies to dilute gases. We find conditions under which the surface remains in a low-density regime and discuss the microscopic origin of corrections to the BCF model.

  15. On the choice of the demand and hydraulic modeling approach to WDN real-time simulation

    Science.gov (United States)

    Creaco, Enrico; Pezzinga, Giuseppe; Savic, Dragan

    2017-07-01

    This paper aims to analyze two demand modeling approaches, i.e., top-down deterministic (TDA) and bottom-up stochastic (BUA), with particular reference to their impact on the hydraulic modeling of water distribution networks (WDNs). In the applications, the hydraulic modeling is carried out through the extended period simulation (EPS) and unsteady flow modeling (UFM). Taking as benchmark the modeling conditions that are closest to the WDN's real operation (UFM + BUA), the analysis showed that the traditional use of EPS + TDA produces large pressure head and water discharge errors, which can be attenuated only when large temporal steps (up to 1 h in the case study) are used inside EPS. The use of EPS + BUA always yields better results. Indeed, EPS + BUA already gives a good approximation of the WDN's real operation when intermediate temporal steps (larger than 2 min in the case study) are used for the simulation. The trade-off between consistency of results and computational burden makes EPS + BUA the most suitable tool for real-time WDN simulation, while benefitting from data acquired through smart meters for the parameterization of demand generation models.
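    The contrast between the two demand models can be sketched with a generic Poisson rectangular pulse generator (an illustrative sketch, not the paper's calibrated generator; the user count, pulse rate, duration, and intensity are invented): TDA assigns every time step the smooth average demand, while BUA aggregates random per-user pulses.

    ```python
    import numpy as np

    # Toy comparison of top-down (TDA) vs bottom-up (BUA) demand models.
    rng = np.random.default_rng(5)
    dt, n_steps = 1.0, 3600                  # 1 s resolution over one hour
    t = np.arange(n_steps) * dt

    n_users = 50
    rate = 5 / 3600.0                        # pulse arrivals per user per second
    dur, q = 60.0, 0.1                       # 60 s pulses of 0.1 L/s

    demand_bua = np.zeros(n_steps)
    for _ in range(n_users):
        for t0 in t[rng.random(n_steps) < rate * dt]:   # Poisson arrivals
            i0 = int(t0 / dt)
            demand_bua[i0:min(i0 + int(dur / dt), n_steps)] += q

    demand_tda = np.full(n_steps, n_users * rate * dur * q)  # smooth average

    print(f"mean BUA {demand_bua.mean():.3f} vs TDA {demand_tda[0]:.3f} L/s, "
          f"BUA peak {demand_bua.max():.2f} L/s")
    ```

    Both series have (nearly) the same mean, but the BUA series carries the short-term peaks and lulls that drive the pressure-head and discharge differences reported above, which a TDA profile smooths away.
    
    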

  16. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    Science.gov (United States)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a hydro-mechanical landslide triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and comparing model outcomes with satellite-based information.

  17. A deformable surface model for real-time water drop animation.

    Science.gov (United States)

    Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun

    2012-08-01

    A water drop behaves differently from a large water body because of its strong viscosity and surface tension under the small scale. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, by reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at a real-time rate, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders-of-magnitude faster than existing simulation approaches that generate comparable water drop effects.

  18. Electric-current-induced step bunching on Si(111)

    International Nuclear Information System (INIS)

    Homma, Yoshikazu; Aizawa, Noriyuki

    2000-01-01

    We experimentally investigated step bunching induced by direct current on vicinal Si(111) '1x1' surfaces using scanning electron microscopy and atomic force microscopy. The scaling relation between the average step spacing l_b and the number of steps N in a bunch, l_b ∼ N^(-α), was determined for four step-bunching temperature regimes above the 7x7-'1x1' transition temperature. The step-bunching rate and scaling exponent differ between neighboring step-bunching regimes. The exponent α is 0.7 for the two regimes where the step-down current induces step bunching (860-960 and 1210-1300 deg. C), and 0.6 for the two regimes where the step-up current induces step bunching (1060-1190 and >1320 deg. C). The number of single steps on terraces also differs in each of the four temperature regimes. For temperatures higher than 1280 deg. C, the prefactor of the scaling relation increases, indicating an increase in step-step repulsion. The scaling exponents obtained agree reasonably well with those predicted by theoretical models. However, they give unrealistic values for the effective charges of adatoms for step-up-current-induced step bunching when the 'transparent' step model is used.
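    As an aside, a scaling relation of the form l_b ∼ N^(-α) is easy to check numerically. The sketch below uses a hypothetical prefactor and noise-free synthetic bunches, and recovers the exponent by linear regression in log-log space:

```python
import numpy as np

def mean_step_spacing(n_steps, prefactor, alpha):
    """Scaling relation for the average step spacing in a bunch of N steps:
    l_b = c * N**(-alpha)."""
    return prefactor * n_steps ** (-alpha)

# Synthetic bunches obeying the relation with alpha = 0.7 (step-down regimes);
# the prefactor of 50.0 is an arbitrary illustrative value
N = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
l_b = mean_step_spacing(N, prefactor=50.0, alpha=0.7)

# Recover the exponent: log l_b = log c - alpha * log N is a straight line
slope, intercept = np.polyfit(np.log(N), np.log(l_b), 1)
alpha_fit = -slope
```

With measured (noisy) spacings, the same log-log fit yields the reported exponents and prefactors.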

  19. Step-wise stimulated martensitic transformations

    International Nuclear Information System (INIS)

    Airoldi, G.; Riva, G.

    1991-01-01

    NiTi alloys, widely known both for their shape memory properties and for their unusual pseudoelastic behaviour, are now at the forefront of attention for step-wise induced memory processes, whether thermally or stress stimulated. Literature results related to step-wise stimulated martensite (direct transformation) are examined and contrasted with the step-wise thermally stimulated parent phase (reverse transformation). Hypotheses are offered to explain the key characters of both transformations, as a thermodynamic model from first principles is still lacking.

  20. Small Town Energy Program (STEP) Final Report revised

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Charles (Chuck) T.

    2014-01-02

    University Park, Maryland (“UP”) is a small town of 2,540 residents, 919 homes, 2 churches, 1 school, 1 town hall, and 1 breakthrough community energy efficiency initiative: the Small Town Energy Program (“STEP”). STEP was developed with a mission to “create a model community energy transformation program that serves as a roadmap for other small towns across the U.S.” STEP first launched in January 2011 in UP and expanded in July 2012 to the neighboring communities of Hyattsville, Riverdale Park, and College Heights Estates, MD. STEP, which concluded in July 2013, was generously supported by a grant from the U.S. Department of Energy (DOE). The STEP model was designed for replication in other resource-constrained small towns similar to University Park - a sector largely neglected to date in federal and state energy efficiency programs. STEP provided a full suite of activities for replication, including: energy audits and retrofits for residential buildings, financial incentives, a community-based social marketing backbone and local community delivery partners. STEP also included the highly innovative use of an “Energy Coach” who worked one-on-one with clients throughout the program. Please see www.smalltownenergy.org for more information. In less than three years, STEP achieved the following results in University Park:
    • 30% of community households participated voluntarily in STEP;
    • 25% of homes received a Home Performance with ENERGY STAR assessment;
    • 16% of households made energy efficiency improvements to their home;
    • 64% of households proceeded with an upgrade after their assessment;
    • 9 Full Time Equivalent jobs were created or retained, and 39 contractors worked on STEP over the course of the project.
    Estimated Energy Savings - Program Totals:
    • Electricity: 204,407 kWh
    • Natural Gas: 24,800 therms
    • Oil: 2,581 gallons
    • Total Estimated MMBTU Saved (Source Energy): 5,474
    • Total Estimated Annual Energy Cost Savings: $61,343
    STEP clients who

  1. Formation of complex wedding-cake morphologies during homoepitaxial film growth of Ag on Ag(111): atomistic, step-dynamics, and continuum modeling

    International Nuclear Information System (INIS)

    Li Maozhi; Han, Yong; Thiel, P A; Evans, J W

    2009-01-01

    An atomistic lattice-gas model is developed which successfully describes all key features of the complex mounded morphologies which develop during deposition of Ag films on Ag(111) surfaces. We focus on this homoepitaxial thin film growth process below 200 K. The unstable multilayer growth mode derives from the presence of a large Ehrlich-Schwoebel step-edge barrier, for which we characterize both the step-orientation dependence and the magnitude. Step-dynamics modeling is applied to further characterize and elucidate the evolution of the vertical profiles of these wedding-cake-like mounds. Suitable coarse-graining of these step-dynamics equations leads to instructive continuum formulations for mound evolution.

  2. A theory of the stepped leader in lightning

    International Nuclear Information System (INIS)

    Lowke, J.J.

    1999-01-01

    There is no generally accepted explanation of the stepped leader behaviour in terms of basic physical processes. Existing theories generally involve significant gas heating within the stepped leader. In the present paper, the stepped nature of the leader is proposed to arise due to a combination of two physical phenomena. Electron transport is dominant over ion transport, during the luminous step stage, because electron mobilities are about 100 times larger than ion mobilities, and the streamer front velocity is determined by electron ionization effects. During the dark time between steps, there are only ions and charge transport is very much slower. The second effect leading to stepped behaviour arises because the electric field required for electric breakdown in air prior to a discharge is ∼30kV/cm, and is very much higher than the electric field of 5kV/cm that is required to sustain a glow discharge in air. During the luminous step stage, electrons tend to produce space charges to make a uniform field in the streamer of ∼5kV/cm. During the dark time between steps, there are no electrons but only ions. Time is required for ion drift to produce a space charge sheath of negative ions at the head of the streamer to produce a field of ∼30kV/cm sufficient for electron ionization to produce a new luminous step

  3. Assimilation of LAI time-series in crop production models

    Science.gov (United States)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Worldwide, agriculture is a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has been shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote-sensing-based plant variables in crop production models would improve agricultural decision support both at the farm and field level. In this study, we investigated the potential of remote-sensing-based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effect of the assimilation method and the number of assimilated observations was evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field-measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth-reducing factors, which are useful for farm decision support.
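    The 'updating' assimilation technique can be illustrated with a toy model. The sketch below is not LINTUL-3; it uses a hypothetical logistic LAI growth rule with direct insertion of observed LAI values at observation days (all parameter values are illustrative):

```python
import numpy as np

def simulate_lai(days, lai0=0.1, r=0.12, lai_max=6.0, observations=None):
    """Toy logistic LAI growth with 'updating' assimilation: whenever an
    observation is available, the model state is replaced by the observed LAI."""
    if observations is None:
        observations = {}
    lai = lai0
    series = []
    for day in range(days):
        lai += r * lai * (1.0 - lai / lai_max)   # logistic growth increment
        if day in observations:
            lai = observations[day]              # direct-insertion update
        series.append(lai)
    return np.array(series)

free_run = simulate_lai(100)                                  # open-loop model
updated = simulate_lai(100, observations={30: 2.5, 60: 4.8})  # assimilated run
```

After each update the model continues from the observed state, so an assimilated run tracks the measurements between observation days instead of drifting with model error.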
The combination of crop models and sensor

  4. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models where the orders are allowed to range reasonably are fitted, considering raw data and log-transformed data with regular differencing (up to second-order differences) and, if the time series is seasonal, seasonal differencing (up to first-order differences). The value of the root mean squared error for each model is calculated by averaging over the one-step forecasts obtained. The model which has the lowest root mean squared error value and passes the Ljung–Box test using all of the available data with a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
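    The expanding-window, one-step-forecast cross-validation at the heart of the procedure can be sketched as follows. The two forecasters here (`naive` and `drift`) are hypothetical stand-ins for the fitted ARIMA and state space models:

```python
import numpy as np

def rolling_one_step_rmse(series, forecaster, min_train=10):
    """Expanding-window cross-validation: each training set contains one more
    observation than the previous one; `forecaster` maps a history to a
    one-step-ahead prediction, and errors are aggregated into an RMSE."""
    errors = []
    for t in range(min_train, len(series)):
        pred = forecaster(series[:t])      # train on the first t observations
        errors.append(series[t] - pred)    # one-step-ahead forecast error
    return float(np.sqrt(np.mean(np.square(errors))))

naive = lambda history: history[-1]                          # random-walk forecast
drift = lambda history: history[-1] + np.mean(np.diff(history))  # drift forecast

y = np.arange(50, dtype=float)              # deterministic trend as toy data
rmse_naive = rolling_one_step_rmse(y, naive)
rmse_drift = rolling_one_step_rmse(y, drift)
```

On this linear trend the drift forecaster is exact (RMSE 0) while the naive one is always off by one unit, so the selection rule would pick the drift model, mirroring how the procedure ranks candidate ARIMA and state space models.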

  5. Control-Oriented Models for Real-Time Simulation of Automotive Transmission Systems

    Directory of Open Access Journals (Sweden)

    Cavina N.

    2015-01-01

    Full Text Available A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware-in-the-Loop (HIL) applications, to support model-based development of the DCT controller and to systematically test its performance. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a simulation step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations have been compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the Transmission Control Unit (TCU). Several tests have been performed on the HIL simulator to verify the TCU performance: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control actions performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. A test automation procedure has finally been developed to permit the execution of a pattern of tests without user interaction; perfectly repeatable tests can be performed for non-regression verification, allowing the testing of new software releases in fully automatic mode.

  6. Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system

    Directory of Open Access Journals (Sweden)

    Yoon Lee

    2012-08-01

    Full Text Available Objectives: To investigate the effect of dentin moisture degree and air-drying time on the dentin-bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. Then the tooth was sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA, and regression analysis was done (p = 0.05). Results: All three factors (material, dentin wetness and air-drying time) showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and a 10 sec air-drying time showed higher bond strength. Conclusions: Within the limitations of this experiment, air-drying time after the application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and dentin moisture before applying the adhesive agent.

  7. The statistics of multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    We propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. We present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplifications of the leading-particle statistics theory. A more comprehensive exposition will appear before long. (author). 32 refs, 4 figs

  8. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It comprises two procedures, surface sampling and sampling relaxation, which ensure a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.

  9. One False Step: "Detroit," "Step" and Movies of Rising and Falling

    Science.gov (United States)

    Beck, Bernard

    2018-01-01

    "Detroit" and "Step" are two recent movies in the context of urban riots in protest of police brutality. They refer to time periods separated by half a century, but there are common themes in the two that seem appropriate to both times. The movies are not primarily concerned with the riot events, but the riot is a major…

  10. Methods for Assessing Item, Step, and Threshold Invariance in Polytomous Items Following the Partial Credit Model

    Science.gov (United States)

    Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.

    2008-01-01

    Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…

  11. Modeling of a 3D CMOS sensor for time-of-flight measurements

    Science.gov (United States)

    Kuhla, Rico; Hosticka, Bedrich J.; Mengel, Peter; Listl, Ludwig

    2004-02-01

    A solid-state 3D-CMOS camera system for direct time-of-flight image acquisition is presented, consisting of a CMOS imaging sensor, a laser diode module for active laser pulse illumination and all optics for image forming, together with MDSI & CDS algorithms for time-of-flight evaluation from intensity imaging. The investigation is carried out using ideal and real signals. For real signals, the narrow infrared laser pulse of the laser diode module and the shutter function of the sensor's column circuit were sampled by a new sampling procedure. A discretely sampled shutter function was recorded by using the impulse response of a narrow pulse of FWHM = 50 ps and an additional delay block with a step size of Δτ = 0.25 ns. A deterministic system model based on LTI transfer functions was developed. The visualized shutter windows give a good understanding of the differences between the ideal and real output functions of the measurement system. Simulations of the shutter and laser pulse revealed an extended linear delay domain for MDSI. A stochastic model for the transfer function and photon noise in the time domain was developed. We used the model to investigate noise while varying the laser pulse and shutter configuration.

  12. Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

    OpenAIRE

    R. Prakash; B. Meenakshipriya; R. Kumaravelan

    2014-01-01

    This paper presents modeling and simulation of a Grid-Connected Photovoltaic (PV) system using an improved mathematical model. The model is used to study the effects of different parameter variations on the PV array, including operating temperature and solar irradiation level. In this paper, a stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the suitable duty ratio at which the DC-DC converter should be operated to maximize the power output. Photo voltaic array with pro...
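    The P&O hill-climbing idea underlying the proposal can be sketched as follows. Note this is the plain fixed-step variant, not the stepped refinement the paper proposes, and the PV power curve is a hypothetical stand-in:

```python
def perturb_and_observe(measure_power, duty0=0.5, step=0.01, iterations=50):
    """Minimal fixed-step P&O loop: perturb the duty ratio, observe the change
    in PV power, and reverse direction whenever the power decreases."""
    duty, direction = duty0, 1
    power = measure_power(duty)
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)  # clamped perturbation
        new_power = measure_power(duty)
        if new_power < power:        # moved the wrong way: reverse direction
            direction = -direction
        power = new_power
    return duty

# Hypothetical PV power vs. duty-ratio curve with its maximum at duty = 0.62
pv_curve = lambda d: 100.0 - 400.0 * (d - 0.62) ** 2
duty_mpp = perturb_and_observe(pv_curve)
```

A fixed step oscillates around the maximum power point once it arrives; a stepped variant shrinks the perturbation near the peak to reduce that oscillation.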

  13. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    Science.gov (United States)

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million annual cases of severe illness and 250 000-500 000 deaths. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals perform dynamic assignment of beds to influenza patients during the annually varying influenza season, and to aid pharmaceutical companies in formulating flexible plans for manufacturing the yearly different influenza vaccine. In this study, we utilised four different multi-step prediction algorithms in the long short-term memory (LSTM). The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction for the US influenza-like illness rates were all low. The LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and therefore help prevent and control influenza worldwide.
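    The 'multiple single-output' (direct) multi-step strategy can be illustrated without an LSTM: one predictor is fitted per forecast horizon rather than iterating a single one-step model. The sketch below uses simple linear regressors on a synthetic stand-in series; the network architecture itself is beyond a short example:

```python
import numpy as np

def fit_direct_models(series, n_lags=2, horizons=(1, 2, 3)):
    """'Multiple single-output' (direct) strategy: fit one predictor per
    forecast horizon instead of iterating a single one-step model."""
    models = {}
    for h in horizons:
        rows, targets = [], []
        for t in range(n_lags, len(series) - h + 1):
            rows.append(series[t - n_lags:t])    # lag window ending at t - 1
            targets.append(series[t + h - 1])    # h-step-ahead target
        X = np.column_stack([np.ones(len(rows)), np.array(rows)])
        coef, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
        models[h] = coef
    return models

def predict_direct(models, history, n_lags=2):
    """One forecast per horizon from the most recent lag window."""
    x = np.concatenate(([1.0], history[-n_lags:]))
    return {h: float(x @ coef) for h, coef in models.items()}

y = np.sin(np.linspace(0.0, 20.0, 200))   # stand-in for an illness-rate series
models = fit_direct_models(y)
forecasts = predict_direct(models, y)      # one value per horizon 1, 2, 3
```

Because each horizon has its own model, errors do not accumulate through recursion, which is the advantage the abstract attributes to this strategy.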

  14. Enhancement of dimple formability in sheet metals by 2-step forming

    International Nuclear Information System (INIS)

    Kim, Minsoo; Bang, Sungsik; Lee, Hyungyil; Kim, Naksoo; Kim, Dongchoul

    2014-01-01

    Highlights:
    • The suggested 2-step model aims at a lower susceptibility to cracking.
    • Strain at the weak point could be reduced by 16% compared to the 1-step model.
    • A more uniform thickness distribution is achieved by the 2-step model.
    • The maximum stress in the FLSD and the GTN damage variable are reduced at the weak point.
    • The 2-step model provides enhanced formability compared to the 1-step model.
    Abstract: In this study, a 2-step stamping model with an additional 1st stamping tool is proposed to reduce stamping flaws in the curved parts of dimples in nuclear fuel spacer grids. First, the strains in the curved part of the dimple are analyzed and compared with strain solutions for pure bending. A reference 2D FE (finite element) model of the 1-step stamping is established, and the corresponding maximum strain is obtained. FE solutions are obtained for various process variable values for the 1st stamping tool used in the 2-step stamping model. Based on these solutions and applying the RSM (response surface method), strains are expressed as a function of the process variables. This function then serves to evaluate optimum process variable values. Finally, by transferring these optimum values to a 3D FE model, we confirm the enhanced formability of the proposed 2-step stamping model.

  15. A Step-Indexed Kripke Model of Hidden State via Recursive Properties on Recursively Defined Metric Spaces

    DEFF Research Database (Denmark)

    Schwinghammer, Jan; Birkedal, Lars; Støvring, Kristian

    2011-01-01

    We present a model of Charguéraud and Pottier’s type and capability system including both frame and anti-frame rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend...

  16. Lateral stability of the spring-mass hopper suggests a two-step control strategy for running.

    Science.gov (United States)

    Carver, Sean G; Cowan, Noah J; Guckenheimer, John M

    2009-06-01

    This paper investigates the control of running gaits in the context of a spring loaded inverted pendulum model in three dimensions. Specifically, it determines the minimal number of steps required for an animal to recover from a perturbation to a specified gait. The model has four control inputs per step: two touchdown angles (azimuth and elevation) and two spring constants (compression and decompression). By representing the locomotor movement as a discrete-time return map and using the implicit function theorem we show that the number of recovery steps needed following a perturbation depends upon the goals of the control mechanism. When the goal is to follow a straight line, two steps are necessary and sufficient for small lateral perturbations. Multistep control laws have a larger number of control inputs than outputs, so solutions of the control problem are not unique. Additional constraints, referred to here as synergies, are imposed to determine unique control inputs for perturbations. For some choices of synergies, two-step control can be expressed as two iterations of its first step policy and designed so that recovery occurs in just one step for all perturbations for which one-step recovery is possible.

  17. The transient response of a quantum wave to an instantaneous potential step switching

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, F [Departamento de Quimica-Fisica, Universidad del Pais Vasco, Apdo 644, 48080 Bilbao (Spain); Cruz, H [Departamento de Fisica Basica, Universidad de La Laguna (Spain); Muga, J G [Departamento de Quimica-Fisica, Universidad del Pais Vasco, Apdo 644, 48080 Bilbao (Spain)

    2002-12-06

    The transient response of a stationary state of a quantum particle in a step potential to an instantaneous change in the step height (a simplified model for a sudden bias switch in an electronic semiconductor device) is solved exactly by means of a semianalytical expression. The characteristic times for the transient process up to the new stationary state are identified. A comparison is made between the exact results and an approximate method.

  18. Integrated Modelling - the next steps (Invited)

    Science.gov (United States)

    Moore, R. V.

    2010-12-01

    Integrated modelling (IM) has made considerable advances over the past decade but it has not yet been taken up as an operational tool in the way that its proponents had hoped. The reasons why will be discussed in Session U17. This talk will propose topics for a research and development programme and suggest an institutional structure which, together, could overcome the present obstacles. Their combined aim would be first to make IM into an operational tool usable by competent public authorities and commercial companies and, in time, to see it evolve into the modelling equivalent of Google Maps, something accessible and usable by anyone with a PC or an iPhone and an internet connection. In a recent study, a number of government agencies, water authorities and utilities applied integrated modelling to operational problems. While the project demonstrated that IM could be used in an operational setting and had benefit, it also highlighted the advances that would be required for its widespread uptake. These were: greatly improving the ease with which models could be a) made linkable, b) linked and c) run; developing a methodology for applying integrated modelling; developing practical options for calibrating and validating linked models; addressing the science issues that arise when models are linked; extending the range of modelling concepts that can be linked; enabling interface standards to pass uncertainty information; making the interface standards platform independent; extending the range of platforms to include those for high performance computing; developing the concept of modelling components as web services; separating simulation code from the model’s GUI, so that all the results from the linked models can be viewed through a single GUI; developing scenario management systems so that there is an audit trail of the version of each model and dataset used in each linked model run. In addition to the above, there is a need to build a set of integrated

  19. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    Full Text Available We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset. Short-selling and borrowing money are allowed. Due to the lack of the iterated-expectation property, the Bellman Optimization Principle does not hold. Thus we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that these different strategies have different advantages: the former maximizes the value function at the initial time t=0, whereas the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.

  20. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  1. Modeling of Step-up Grid-Connected Photovoltaic Systems for Control Purposes

    Directory of Open Access Journals (Sweden)

    Daniel Gonzalez

    2012-06-01

    Full Text Available This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state space representation to simplify their use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect in selecting the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the paper results are validated in an experimental test bench.
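    The controllability and observability analyses mentioned above reduce to rank tests on the standard Kalman matrices. A minimal sketch with a hypothetical two-state model (the matrices are illustrative values, not the paper's PV models):

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B];
    full rank means the state-space model is controllable."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def observability_matrix(A, C):
    """Kalman observability matrix [C; CA; ...; C A^(n-1)];
    full rank means the model is observable."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Hypothetical two-state averaged model (illustrative numbers only)
A = np.array([[0.0, -10.0], [10.0, -1.0]])
B = np.array([[100.0], [0.0]])
C = np.array([[0.0, 1.0]])

ctrb_rank = np.linalg.matrix_rank(controllability_matrix(A, B))
obsv_rank = np.linalg.matrix_rank(observability_matrix(A, C))
```

If either rank falls below the state dimension, some dynamics cannot be steered by the input or reconstructed from the output, which constrains the controller and observer design criteria the paper derives.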

  2. Step-to-step variability in treadmill walking: influence of rhythmic auditory cueing.

    Directory of Open Access Journals (Sweden)

    Philippe Terrier

    Full Text Available While walking, human beings continuously adjust step length (SpL), step time (SpT), step speed (SpS = SpL/SpT) and step width (SpW) by integrating both feedforward and feedback mechanisms. These motor control processes result in correlations of gait parameters between consecutive strides (statistical persistence). Constraining gait with a speed cue (treadmill) and/or a rhythmic auditory cue (metronome) modifies the statistical persistence to anti-persistence. The objective was to analyze whether the combined effect of treadmill and rhythmic auditory cueing (RAC) modified not only statistical persistence, but also fluctuation magnitude (standard deviation, SD) and stationarity of SpL, SpT, SpS and SpW. Twenty healthy subjects performed 6 × 5 min. walking tests at various imposed speeds on a treadmill instrumented with foot-pressure sensors. Freely-chosen walking cadences were assessed during the first three trials, and then imposed accordingly in the last trials with a metronome. Fluctuation magnitude (SD) of SpT, SpL, SpS and SpW was assessed, as well as the NonStationarity Index (NSI), which estimates the dispersion of local means in the time series (SD of 20 local means over 10 steps). No effect of RAC on fluctuation magnitude (SD) was observed. SpW was not modified by RAC, which likely indicates that lateral foot placement is regulated separately. Stationarity (NSI) was modified by RAC in the same manner as the persistence pattern: treadmill induced low NSI in the time series of SpS, and high NSI in SpT and SpL. On the contrary, SpT, SpL and SpS exhibited low NSI under the RAC condition. We used a relatively short sample of consecutive strides (100) as compared to the usual number of strides required to analyze fluctuation dynamics (200 to 1000 strides). Therefore, the responsiveness of the stationarity measure (NSI) to cued walking opens the perspective of performing short walking tests adapted to patients with a reduced gait perimeter.
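    The NonStationarity Index described above (the SD of 20 local means, each computed over 10 steps) is straightforward to compute. A sketch on synthetic step-time series (the series and their parameters are hypothetical):

```python
import numpy as np

def nonstationarity_index(series, n_windows=20, window=10):
    """NSI as described in the abstract: the dispersion (SD) of 20 local
    means, each computed over 10 consecutive steps."""
    x = np.asarray(series[: n_windows * window], dtype=float)
    local_means = x.reshape(n_windows, window).mean(axis=1)
    return float(local_means.std(ddof=1))

rng = np.random.default_rng(0)
stationary = rng.normal(1.0, 0.05, 200)              # steady step times (s)
drifting = stationary + np.linspace(0.0, 0.5, 200)   # slow drift of the mean
```

A series whose local mean drifts over the trial yields a markedly larger NSI than a stationary one with the same step-to-step noise, which is what makes the index sensitive to cue-induced changes in stationarity.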

  3. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    Science.gov (United States)

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend the prior work with a focus on the examination of vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both the time and time-frequency domains. Minimal-order non-linear models of the experiment are successfully constructed using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined, though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double-sided impacts, and time-varying periods suggest softening trends under the step-down torque. The non-linear models are experimentally validated by comparing results with new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
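    The core non-linear element of such models, a torsional stiffness with a clearance (dead zone), is easy to sketch. The snippet below is an illustrative single-DOF step-response simulation, not the authors' multi-degree-of-freedom model; the parameter values, the viscous damping used as a stand-in for the friction elements, and the semi-implicit Euler integrator are all our assumptions:

```python
import numpy as np

def clearance_torque(theta, k=50.0, b=0.1):
    """Piecewise-linear restoring torque with a symmetric clearance (dead
    zone) of half-width b rad: zero inside the gap, stiffness k outside."""
    if theta > b:
        return -k * (theta - b)
    if theta < -b:
        return -k * (theta + b)
    return 0.0

def step_response(T0=2.0, J=1.0, c=2.0, dt=1e-3, t_end=5.0):
    """Step-torque response of a single-DOF torsional oscillator (inertia J)
    with one clearance, integrated with semi-implicit Euler. Viscous damping
    c is a crude stand-in for the experiment's friction elements."""
    n = int(t_end / dt)
    theta, omega, hist = 0.0, 0.0, np.empty(n)
    for i in range(n):
        domega = (T0 + clearance_torque(theta) - c * omega) / J
        omega += dt * domega
        theta += dt * omega
        hist[i] = theta
    return hist
```

    With these numbers the response oscillates across the gap edge before settling near theta = b + T0/k, the static balance of the step torque against the spring.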

  4. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    Science.gov (United States)

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative approach permits a simpler and less detailed description of the biological system and efficiently identifies stable states, but it remains inconvenient for describing the transient kinetics leading to these states; in this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe the dynamical behavior of biological processes more accurately, as it follows the evolution of the concentrations or activities of chemical species as a function of time, but it requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
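    The continuous-time Boolean scheme the abstract describes can be illustrated with a generic Gillespie loop on a Boolean state space: enabled node flips carry rates, waiting times are exponential in the total rate, and one flip is chosen with probability proportional to its rate. Everything below (function names, the toy two-node network and its rates) is a hypothetical sketch, not the authors' implementation:

```python
import random

def gillespie_boolean(rates, state, t_end, seed=1):
    """Continuous-time Boolean dynamics via the Gillespie algorithm.
    `rates(state)` returns (node, new_value, rate) transitions enabled in
    `state`; the process stops at t_end or when no transition is enabled."""
    rng = random.Random(seed)
    t, state = 0.0, dict(state)
    trajectory = [(t, dict(state))]
    while True:
        enabled = [tr for tr in rates(state) if tr[2] > 0]
        total = sum(r for _, _, r in enabled)
        if total == 0:          # absorbing state reached
            break
        t += rng.expovariate(total)      # exponential waiting time
        if t > t_end:
            break
        x = rng.uniform(0, total)        # pick a transition by its rate
        for node, value, r in enabled:
            x -= r
            if x <= 0:
                state[node] = value
                break
        trajectory.append((t, dict(state)))
    return trajectory

# Toy network: A switches on spontaneously; B switches on only once A is on.
def toy_rates(s):
    return [("A", 1, 0.0 if s["A"] else 2.0),
            ("B", 1, 1.0 if s["A"] and not s["B"] else 0.0)]
```

    Averaging many such stochastic trajectories recovers the state probabilities whose evolution the abstract says can be written as ordinary differential equations.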

  5. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  6. Evolution of robot-assisted orthotopic ileal neobladder formation: a step-by-step update to the University of Southern California (USC) technique.

    Science.gov (United States)

    Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M

    2017-01-01

    To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suturing of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross-folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. Our technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors. BJU International © 2016 BJU International. Published by John Wiley & Sons Ltd.

  7. Lagrangian fractional step method for the incompressible Navier-Stokes equations on a periodic domain

    International Nuclear Information System (INIS)

    Boergers, C.; Peskin, C.S.

    1987-01-01

    In the Lagrangian fractional step method introduced in this paper, the fluid velocity and pressure are defined on a collection of N fluid markers. At each time step, these markers are used to generate a Voronoi diagram, and this diagram is used to construct finite-difference operators corresponding to the divergence, gradient, and Laplacian. The splitting of the Navier-Stokes equations leads to discrete Helmholtz and Poisson problems, which we solve using a two-grid method. The nonlinear convection terms are modeled simply by the displacement of the fluid markers. We have implemented this method on a periodic domain in the plane. We describe an efficient algorithm for the numerical construction of periodic Voronoi diagrams, and we report on numerical results which indicate that the fractional step method is convergent of first order. The overall work per time step is proportional to N log N. copyright 1987 Academic Press, Inc

  8. Step Detection Robust against the Dynamics of Smartphones

    Science.gov (United States)

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
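    A simplified version of the peak-valley idea can be sketched as follows. Here the adaptive thresholds are reduced to global mean/std statistics of the signal plus a fixed refractory interval, so this illustrates the principle rather than reproducing the published algorithm or its tuned parameters:

```python
import numpy as np

def detect_steps(acc_mag, fs=50, min_interval=0.25, k=0.5):
    """Toy peak-valley step detector. A step is a local peak in the
    acceleration magnitude exceeding mean + k*std that is followed within
    min_interval seconds by a valley below mean - k*std; candidate peaks
    closer than min_interval are rejected (refractory period)."""
    x = np.asarray(acc_mag, dtype=float)
    mean, std = x.mean(), x.std()
    up, down = mean + k * std, mean - k * std
    gap = int(min_interval * fs)
    peaks, last = [], -gap
    for i in range(1, len(x) - 1):
        if x[i] > up and x[i] >= x[i - 1] and x[i] > x[i + 1] and i - last >= gap:
            # require an adjacent valley after the peak
            valley = x[i:i + gap].min() if i + gap <= len(x) else x[i:].min()
            if valley < down:
                peaks.append(i)
                last = i
    return peaks
```

    On a synthetic 2 Hz "walking" signal sampled at 50 Hz, each gait cycle contributes exactly one peak-valley pair.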

  9. Global phenomena from local rules: Peer-to-peer networks and crystal steps

    Science.gov (United States)

    Finkbiner, Amy

    We find a time-dependence term for the motion that does not appear in continuum models, and we determine an explicit dependence on step number.

  10. First step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1984-09-01

    The first step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code is presented. This step consists of the introduction of a simplified model for simulating the steam generator. This model is the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code has a very detailed model for the steam generator, called UTSG. This model has spatial dependence, correlations for two-phase flow, and distinct correlations for different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous mixture of gas and liquid), has no spatial dependence and uses only one generalized correlation to treat several heat transfer processes. (Author) [pt

  11. A stepped-care model of post-disaster child and adolescent mental health service provision.

    Science.gov (United States)

    McDermott, Brett M; Cobham, Vanessa E

    2014-01-01

    From a global perspective, natural disasters are common events. Published research highlights that a significant minority of exposed children and adolescents develop disaster-related mental health syndromes and associated functional impairment. Consistent with the considerable unmet need of children and adolescents with regard to psychopathology, there is strong evidence that many children and adolescents with post-disaster mental health presentations are not receiving adequate interventions. The aims were to critique existing child and adolescent mental health service (CAMHS) models of care and the capacity of such models to deal with any post-disaster surge in clinical demand, and, further, to detail an innovative service response: a child and adolescent stepped-care service provision model. A narrative review of traditional CAMHS is presented. Important elements of a disaster response (individual versus community recovery, public health approaches, capacity for promotion and prevention, and service reach) are discussed and compared with the CAMHS approach. Difficulties with traditional models of care are highlighted across all levels of intervention, from the ability to provide preventative initiatives to the capacity to provide intensive specialised posttraumatic stress disorder interventions. In response, our over-arching stepped-care model is advocated. The general response is discussed and details of the three tiers of the model are provided: Tier 1, a communication strategy; Tier 2, parent effectiveness and teacher training; and Tier 3, screening linked to trauma-focused cognitive behavioural therapy. In this paper, we argue that traditional CAMHS are not an appropriate model of care to meet the clinical needs of this group in the post-disaster setting. We conclude with suggestions on how improved post-disaster child and adolescent mental health outcomes can be achieved by applying an innovative service approach.

  12. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
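    The embedded-pair idea, reusing a lower-order solution to estimate the local truncation error and drive automatic time step control, can be sketched with the well-known Bogacki-Shampine 3(2) pair standing in for the paper's fourth-order generalized Runge-Kutta scheme with embedded third-order solution:

```python
import math

def rk_adaptive(f, t, y, t_end, h=0.1, tol=1e-8):
    """Adaptive time stepping with an embedded Runge-Kutta pair (here the
    Bogacki-Shampine 3(2) pair, illustrating the same mechanism as the
    paper's 4(3) scheme). The gap between the 3rd- and 2nd-order solutions
    estimates the local truncation error and rescales the step size."""
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
        y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9.0            # 3rd order
        k4 = f(t + h, y3)
        y2 = y + h * (7 * k1 + 6 * k2 + 8 * k3 + 3 * k4) / 24.0  # 2nd order
        err = abs(y3 - y2)
        if err <= tol:                      # accept the step
            t, y = t + h, y3
        # standard controller: scale h by (tol/err)^(1/3), with safety caps
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)))
    return y
```

    Steps whose error estimate exceeds the tolerance are rejected and retried with a smaller h, which is exactly the "automatic time step control" role the embedded solution plays in the paper.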

  13. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
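    A toy simulation in the spirit of such task-list models shows the mechanism that produces bursty waiting times; our simplifications (uniform priorities, deterministic highest-priority-first selection, discrete unit time steps) are not the paper's exact priority-distribution formulation:

```python
import random

def simulate_queue(list_len=10, steps=20000, seed=3):
    """Toy task-list model: a fixed-length list of tasks with random
    priorities; at each discrete step the highest-priority task is executed
    and replaced by a fresh task. Returns the waiting times (ages) of the
    executed tasks."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]  # (priority, birth)
    waits = []
    for t in range(steps):
        i = max(range(list_len), key=lambda j: tasks[j][0])
        waits.append(t - tasks[i][1])        # age of the executed task
        tasks[i] = (rng.random(), t)         # replace with a fresh task
    return waits
```

    Most tasks are executed almost immediately, while low-priority tasks linger for very long times, which is the qualitative origin of the broad interevent time distributions the paper analyzes exactly.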

  14. Detection of Tomato black ring virus by real-time one-step RT-PCR.

    Science.gov (United States)

    Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G

    2011-01-01

    A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity. The development of this rapid assay should aid quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partitioning of the universe of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
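    The window-adaptation idea can be illustrated independently of the fuzzy machinery. The sketch below (our own simplification, not the ATVF rules) selects the analysis window that minimizes training error for a naive moving-average forecaster:

```python
import numpy as np

def best_window(series, candidates=(2, 3, 4, 6, 8), train_frac=0.7):
    """Illustrative window-size adaptation: for each candidate window w,
    forecast each training point as the mean of the previous w values and
    keep the w with the smallest mean absolute error on the training set."""
    x = np.asarray(series, dtype=float)
    n_train = int(len(x) * train_frac)

    def mae(w):
        preds = np.array([x[i - w:i].mean() for i in range(w, n_train)])
        return np.mean(np.abs(preds - x[w:n_train]))

    return min(candidates, key=mae)
```

    For a period-2 alternating series, even windows average out the oscillation and the smallest even window wins; for a slowly varying signal, short windows track the local level best.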

  16. A stochastic step flow model with growth in 1+1 dimensions

    International Nuclear Information System (INIS)

    Margetis, Dionisios

    2010-01-01

    Mathematical implications of adding Gaussian white noise to the Burton-Cabrera-Frank model for N terraces ('gaps') on a crystal surface are studied under external material deposition for large N. The terraces separate straight, non-interacting line defects (steps) with uniform spacing initially (t = 0). As the growth tends to vanish, the gaps become uncorrelated. First, simple closed-form expressions for the gap variance are obtained directly for small fluctuations. The leading-order, linear stochastic differential equations are prototypical for discrete asymmetric processes. Second, the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy for joint gap densities is formulated. Third, a self-consistent 'mean field' is defined via the BBGKY hierarchy. This field is then determined approximately through a terrace decorrelation hypothesis. Fourth, comparisons are made of directly obtained and mean-field results. Limitations and issues in the modeling of noise are outlined.

  17. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Science.gov (United States)

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, and (b) optimization of the PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources, including bacteria, viruses, yeasts, other Aspergillus species and other fungal species, as well as human DNA, and there were no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  18. Iteratively improving Hi-C experiments one step at a time.

    Science.gov (United States)

    Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton

    2018-04-30

    The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Maday, Yvon, E-mail: maday@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions and Institut Universitaire de France, F-75005, Paris (France); Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); Brown Univ, Division of Applied Maths, Providence, RI (United States); Riahi, Mohamed Kamel, E-mail: riahi@cmap.polytechnique.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CMAP, Inria-Saclay and X-Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex (France); Salomon, Julien, E-mail: salomon@ceremade.dauphine.fr [CEREMADE, Univ Paris-Dauphine, Pl. du Mal. de Lattre de Tassigny, F-75016, Paris (France)

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner benchmark.
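    The predictor-corrector structure of parareal is compact enough to sketch for a scalar ODE. The toy implementation below is our own (coarse explicit Euler and fine RK4 propagators are illustrative choices, and the fine sweeps are left serial); it applies the standard correction U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k):

```python
import math

def parareal(f, y0, t0, t1, n_slices=10, k_iters=5,
             coarse_steps=1, fine_steps=100):
    """Minimal parareal sketch for y' = f(t, y): a cheap coarse propagator
    (few Euler steps per slice) is corrected iteratively by an accurate
    fine propagator (many RK4 steps per slice). In a real implementation
    the fine propagations of all slices run in parallel."""
    def euler(t, y, dt, n):
        for _ in range(n):
            y += dt * f(t, y); t += dt
        return y
    def rk4(t, y, dt, n):
        for _ in range(n):
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt * k1 / 2)
            k3 = f(t + dt / 2, y + dt * k2 / 2)
            k4 = f(t + dt, y + dt * k3)
            y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6; t += dt
        return y
    H = (t1 - t0) / n_slices
    ts = [t0 + i * H for i in range(n_slices + 1)]
    G = lambda t, y: euler(t, y, H / coarse_steps, coarse_steps)
    F = lambda t, y: rk4(t, y, H / fine_steps, fine_steps)
    U = [y0]                                  # initial guess: coarse sweep
    for i in range(n_slices):
        U.append(G(ts[i], U[i]))
    for _ in range(k_iters):
        Fu = [F(ts[i], U[i]) for i in range(n_slices)]      # parallelizable
        Gu_old = [G(ts[i], U[i]) for i in range(n_slices)]
        new = [y0]                            # sequential coarse correction
        for i in range(n_slices):
            new.append(G(ts[i], new[i]) + Fu[i] - Gu_old[i])
        U = new
    return U[-1]
```

    After a few iterations the slice endpoints converge toward the fine sequential solution, which is what makes the coarse/fine split pay off when the fine sweeps run concurrently.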

  20. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height and velocity fields.
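    The Shapiro and Robert (Asselin) filters mentioned above have standard textbook forms, sketched below in Python for a 1-D periodic field; EXSHALL's FORTRAN implementation, and its high-latitude filtering, may differ in detail:

```python
import numpy as np

def shapiro_filter(x):
    """Second-order Shapiro (1-2-1) smoothing of a 1-D periodic field:
    removes two-grid-interval noise completely while leaving a constant
    field unchanged."""
    return 0.25 * np.roll(x, 1) + 0.5 * x + 0.25 * np.roll(x, -1)

def robert_asselin(prev_filtered, current, nxt, nu=0.1):
    """Robert-Asselin time filter used with leapfrog stepping: returns the
    filtered value of the current time level, damping the computational
    mode that alternates between adjacent time levels."""
    return current + nu * (prev_filtered - 2.0 * current + nxt)
```

    The 1-2-1 weights have zero response at the two-grid wavelength, which is why such filters are paired with large time-step schemes to suppress the shortest, least accurate scales.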

  1. Comparison of single-step and two-step purified coagulants from Moringa oleifera seed for turbidity and DOC removal.

    Science.gov (United States)

    Sánchez-Martín, J; Ghebremichael, K; Beltrán-Heredia, J

    2010-08-01

    The coagulant proteins from Moringa oleifera purified with single-step and two-step ion-exchange processes were used for the coagulation of surface water from the Meuse river in The Netherlands. The performance of the two purified coagulants and the crude extract was assessed in terms of turbidity and DOC removal. The results indicated that the optimum dosage of the single-step purified coagulant was more than two times higher than that of the two-step purified coagulant in terms of turbidity removal, and the residual DOC with the two-step purified coagulant was lower than with the single-step purified coagulant or the crude extract. (c) 2010 Elsevier Ltd. All rights reserved.

  2. From representing to modelling knowledge: Proposing a two-step training for excellence in concept mapping

    Directory of Open Access Journals (Sweden)

    Joana G. Aguiar

    2017-09-01

    Training users in the concept mapping technique is critical for ensuring a high-quality concept map in terms of graphical structure and content accuracy. However, assessing excellence in concept mapping through structural and content features is a complex task. This paper proposes a two-step sequential training in concept mapping. The first step requires the fulfilment of low-order cognitive objectives (remember, understand and apply) to facilitate novices' development into good Cmappers by honing their knowledge representation skills. The second step requires the fulfilment of high-order cognitive objectives (analyse, evaluate and create) to grow good Cmappers into excellent ones through the development of knowledge modelling skills. Based on Bloom's revised taxonomy and cognitive load theory, this paper presents theoretical accounts to (1) identify the criteria distinguishing good and excellent concept maps, (2) inform instructional tasks for concept map elaboration and (3) propose a prototype for training users in concept mapping combining online and face-to-face activities. The proposed training application and institutional certification are the next steps toward the mature use of concept maps for educational as well as business purposes.

  3. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators

    Directory of Open Access Journals (Sweden)

    Monteiro KA

    2017-06-01

    Kristina A Monteiro, Paul George, Richard Dollase, Luba Dumenco Office of Medical Education, The Warren Alpert Medical School of Brown University, Providence, RI, USA Abstract: The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensing Examination (USMLE) Step 2 clinical knowledge (CK) examination. Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators in order to identify at-risk students with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examinations, and USMLE Step 1 and Step 2 CK scores between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from
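    Model 1 above is an ordinary least-squares regression on two predictors. A minimal sketch with synthetic data follows; the coefficients, score distributions and noise level are invented for illustration and are not the study's values:

```python
import numpy as np

def fit_predictor(course_mean, step1, step2ck):
    """Least-squares fit of Step2CK ~ b0 + b1*course_mean + b2*Step1
    (the structure of the paper's model 1); returns (coefficients, R^2)."""
    X = np.column_stack([np.ones_like(course_mean), course_mean, step1])
    beta, *_ = np.linalg.lstsq(X, step2ck, rcond=None)
    pred = X @ beta
    ss_res = np.sum((step2ck - pred) ** 2)
    ss_tot = np.sum((step2ck - step2ck.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

# Synthetic illustration: Step 2 CK driven by both indicators plus noise.
rng = np.random.default_rng(0)
course = rng.normal(80, 5, 218)
step1 = rng.normal(230, 15, 218)
ck = 20 + 0.8 * course + 0.6 * step1 + rng.normal(0, 8, 218)
beta, r2 = fit_predictor(course, step1, ck)
```

    The fitted R^2 plays the same role as the study's explained-variance figures: it gauges how much of the Step 2 CK spread the earlier indicators capture.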

  4. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step-wise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.

  5. Preparatory steps for a robust dynamic model for organically bound tritium dynamics in agricultural crops

    Energy Technology Data Exchange (ETDEWEB)

    Melintescu, A.; Galeriu, D. [' Horia Hulubei' National Institute for Physics and Nuclear Engineering, Bucharest-Magurele (Romania); Diabate, S.; Strack, S. [Institute of Toxicology and Genetics, Karlsruhe Institute of Technology - KIT, Eggenstein-Leopoldshafen (Germany)

    2015-03-15

    The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms, and a fully mechanistic model is difficult to develop given this complexity and the variability of environmental conditions. First, a review of existing models (ORYZA2000, CROPTRIT and WOFOST) is made, presenting their features and limits. Secondly, the preparatory steps for a robust model are discussed, considering the contribution of dry matter and photosynthesis to the dynamics of OBT (organically bound tritium) in crops.

  6. Detection of Listeria monocytogenes in ready-to-eat food by Step One real-time polymerase chain reaction.

    Science.gov (United States)

    Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal

    2012-01-01

    The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real-time polymerase chain reaction (PCR) system. We used the PrepSEQ Rapid Spin Sample Preparation Kit for DNA isolation and the MicroSEQ® Listeria monocytogenes Detection Kit for real-time PCR. Among 30 samples of ready-to-eat milk and meat products analyzed without incubation, we detected Listeria monocytogenes in five samples (swabs). The internal positive control (IPC) was positive in all samples. Our results indicate that the real-time PCR assay developed in this study can sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.

  7. Thermal modeling of step-out targets at the Soda Lake geothermal field, Churchill County, Nevada

    Science.gov (United States)

    Dingwall, Ryan Kenneth

    Temperature data at the Soda Lake geothermal field in the southeastern Carson Sink, Nevada, highlight an intense thermal anomaly. The field produces roughly 11 MWe from two power-producing facilities rated at 23 MWe; the low output is attributed to the inability to locate and produce sufficient volumes of fluid at adequate temperature. Additionally, the current producing area has experienced declining production temperatures over its 40-year history. Two step-out targets adjacent to the main field have been identified that have the potential to increase production and extend the life of the field. Though shallow temperatures in the two subsidiary areas are significantly lower than those within the main anomaly, measurements in deeper wells (>1,000 m) show that temperatures viable for utilization are present. High-pass filtering of the available complete Bouguer gravity data indicates that geothermal flow is present within the shallow sediments of the two subsidiary areas. Significant faulting is observed in the seismic data in both subsidiary areas; these structures are highlighted in the seismic similarity attribute calculated as part of this study. One possible conceptual model for the geothermal system(s) at the step-out targets involves upflow along these faults from depth. To test this hypothesis, three-dimensional computer models were constructed to observe the temperatures that would result from geothermal flow along the observed fault planes. Results indicate that the observed faults are viable hosts for the geothermal system(s) in the step-out areas. Consequently, these faults are proposed as targets for future exploration focus and step-out drilling.

  8. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Anodization of a Ti sheet in ethylene glycol electrolyte containing 0.38 wt% NH4F, with the addition of 1.79 wt% H2O, at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were investigated for single-step and three-step anodization within a two-parallel-electrode anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes transformed to a higher proportion of the anatase crystal phase, and crystallization of the anatase phase was enhanced as the duration of the final anodization step increased. Using single-step anodization, the pore texture of the oxide film began to appear at an applied potential of 30 V, and a better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization with anodizing times of 1-3 h. Results showed that well-smoothed surface coverage with a higher density of porous TiO2 was achieved by prolonging the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A better-ordered nanostructured-TiO2 arrangement was produced using three-step anodization at 60 V with 3 h for each step.

  9. Continuous versus step-by-step scanning mode of a novel 3D scanner for CyberKnife measurements

    International Nuclear Information System (INIS)

    Al Kafi, M Abdullah; Mwidu, Umar; Moftah, Belal

    2015-01-01

    The purpose of the study was to investigate the continuous versus step-by-step scanning modes of a commercial circular 3D scanner for commissioning measurements of a robotic stereotactic radiosurgery system. The 3D scanner was used for profile measurements in step-by-step and continuous modes with the intent of comparing the two scanning modes for consistency. Profile measurements in-plane, cross-plane, and at 15 degrees and 105 degrees were performed for both fixed cones and the Iris collimator at the depth of maximum dose and at 10 cm depth. For CyberKnife field-size, penumbra, flatness, and symmetry analysis, it was observed that measurements in continuous mode, which can be up to six times faster than step-by-step mode, are comparable and produce scans nearly identical to step-by-step mode. When compared with centered step-by-step data, fully processed continuous-mode data give rise to a maximum symmetry and flatness difference of 0.50% and 0.60%, respectively, for all the fixed cones and Iris collimator settings studied. - Highlights: • 3D scanner for CyberKnife beam data measurements. • Beam data analysis for continuous and step-by-step scan modes. • Faster continuous scanning data are comparable to step-by-step scan data.
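    Flatness and symmetry are not defined in the abstract; one common convention is sketched below on a synthetic profile, so both the formulae and the numbers are illustrative assumptions rather than the authors' exact analysis.

```python
import numpy as np

def flatness(dose):
    """(Dmax - Dmin) / (Dmax + Dmin) * 100 over the profile core."""
    d = np.asarray(dose, dtype=float)
    return float((d.max() - d.min()) / (d.max() + d.min()) * 100.0)

def symmetry(dose):
    """Maximum percent difference between mirrored points of the profile."""
    d = np.asarray(dose, dtype=float)
    return float(np.max(np.abs(d - d[::-1]) / d) * 100.0)

# Synthetic central portion of a normalized beam profile (percent dose).
profile = np.array([98.5, 99.2, 100.0, 99.6, 98.9])
f, s = flatness(profile), symmetry(profile)
```

    Running profiles from both scan modes through the same analysis and differencing the results gives comparisons of the 0.50%/0.60% kind reported above.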

  10. Compensatory stepping responses in individuals with stroke: a pilot study.

    Science.gov (United States)

    Lakhani, Bimal; Mansfield, Avril; Inness, Elizabeth L; McIlroy, William E

    2011-05-01

    Impaired postural control and a high incidence of falls are commonly observed following stroke. Compensatory stepping responses are critical to reactive balance control. We hypothesized that, following a stroke, individuals with unilateral limb dyscontrol face a unique challenge in controlling such rapid stepping reactions, which may eventually be linked to the high rate of falling. The objectives of this exploratory pilot study were to investigate compensatory stepping in individuals post-stroke with regard to: (1) the choice of initial stepping limb (paretic or non-paretic); (2) step characteristics; and (3) differences in step characteristics when the initial step is taken with the paretic vs. the non-paretic limb. Four subjects following stroke (38-165 days post-stroke) and 11 healthy young adults were recruited. Anterior and posterior perturbations were delivered using a weight-drop system. Force plates recorded centre-of-pressure excursion prior to the onset of stepping, as well as step timing. Of the four subjects, three attempted to step only with their non-paretic limb and one stepped with either limb. Time to foot-off was generally slow, whereas step onset time and swing time were comparable to healthy controls. Two of the four subjects executed multi-step responses in every trial, and attempts to force stepping with the paretic limb were unsuccessful in three of the four subjects. Despite high clinical balance scores, these individuals with stroke demonstrated impaired compensatory stepping responses, suggesting that current clinical evaluations might not accurately reflect reactive balance control in this population.

  11. Recovery of forward stepping in spinal cord injured patients does not transfer to untrained backward stepping.

    Science.gov (United States)

    Grasso, Renato; Ivanenko, Yuri P; Zago, Myrka; Molinari, Marco; Scivoletto, Giorgio; Lacquaniti, Francesco

    2004-08-01

    Six spinal cord injured (SCI) patients were trained to step on a treadmill with body-weight support for 1.5-3 months. At the end of training, foot motion recovered the shape and the step-by-step reproducibility that characterize normal gait. They were then asked to step backward on the treadmill belt that moved in the opposite direction relative to standard forward training. In contrast to healthy subjects, who can immediately reverse the direction of walking by time-reversing the kinematic waveforms, patients were unable to step backward. Similarly patients were unable to perform another untrained locomotor task, namely stepping in place on the idle treadmill. Two patients who were trained to step backward for 2-3 weeks were able to develop control of foot motion appropriate for this task. The results show that locomotor improvement does not transfer to untrained tasks, thus supporting the idea of task-dependent plasticity in human locomotor networks.

  12. Implementing a stepped-care approach in primary care: results of a qualitative study

    Directory of Open Access Journals (Sweden)

    Franx Gerdien

    2012-01-01

    Background: Since 2004, 'stepped-care models' have been adopted in several international evidence-based clinical guidelines to guide clinicians in the organisation of depression care. To enhance the adoption of this new treatment approach, a Quality Improvement Collaborative (QIC) was initiated in the Netherlands. Methods: Alongside the QIC, an intervention study using a controlled before-and-after design was performed. Part of the study was a process evaluation, utilizing semi-structured group interviews, to provide insight into the perceptions of the participating clinicians on the implementation of stepped care for depression into their daily routines. Participants were primary care clinicians, specialist clinicians, and other healthcare staff from eight regions in the Netherlands. Analysis was supported by Normalisation Process Theory (NPT). Results: The introduction of a stepped-care model for depression to primary care teams within the context of a depression QIC was generally well received by the participating clinicians. All three elements of the proposed stepped-care model (patient differentiation, stepped-care treatment, and outcome monitoring) were translated and introduced locally. Clinicians reported changes in terms of learning how to differentiate between patient groups and different levels of care, changing antidepressant prescribing routines as a consequence of having a broader treatment package to offer their patients, and better working relationships with patients and colleagues. A complex range of factors influenced the implementation process. Facilitating factors were the stepped-care model itself, the structured team meetings (part of the QIC method), and the positive reaction from patients to stepped care. The differing views of depression and depression care within multidisciplinary health teams, lack of resources, and poor information systems hindered the rapid introduction of the stepped-care model. The NPT

  13. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter, extracting optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features; ii) it results in effective, interpretable features that lead to improved classification and visualization; and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel × frequency bin × time frame, and a microarray dataset modeled as gene × sample × time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with average classification accuracies of 98.26% and 89.63% for the epilepsy dataset and the microarray dataset, respectively. These averages represent an improvement over matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice.
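    The between-class/within-class scatter idea behind such discriminative extraction can be illustrated on plain matrices. The sketch below is a generic Fisher-style projection in numpy, not the authors' Tucker-based TDFE; the data and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two classes of flattened feature vectors (stand-ins for tensor modes).
X0 = rng.normal(0.0, 1.0, (40, 5))
X1 = rng.normal(2.0, 1.0, (40, 5))
mean = np.vstack([X0, X1]).mean(axis=0)

# Between-class scatter Sb and within-class scatter Sw.
Sb = np.zeros((5, 5))
Sw = np.zeros((5, 5))
for Xi in (X0, X1):
    mi = Xi.mean(axis=0)
    d = (mi - mean)[:, None]
    Sb += len(Xi) * (d @ d.T)
    Sw += (Xi - mi).T @ (Xi - mi)

# Discriminative direction: leading eigenvector of inv(Sw) @ Sb
# (the generalized eigenvalue problem mentioned in the abstract).
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = np.real(vecs[:, np.argmax(np.real(vals))])

# Class means should separate clearly along the projection.
sep = abs(float((X1.mean(axis=0) - X0.mean(axis=0)) @ w))
```

    TDFE applies this scatter criterion per tensor mode within an alternating Tucker optimization, rather than on a single flattened matrix as here.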

  14. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    The goal of this paper was to develop an education-oriented application based on the data mining Apriori algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact of problem constraints (the values of the support and confidence variables, or the size of the transactional database) on execution speed. The paper presents a brief overview of the Apriori algorithm, aspects of a step-by-step implementation of the algorithm, a discussion of the education-oriented user interface, and the process of mining a test transactional database. The impact of some constraints on the speed of the algorithm is also measured experimentally, without a systematic review of different approaches to increasing execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
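    The core of the Apriori algorithm (level-wise candidate generation followed by support-based pruning) can be sketched in a few lines; the toy transaction database and the 0.4 support threshold are illustrative.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return a dict mapping frequent itemsets (frozensets) to their support."""
    n = len(transactions)
    tsets = [set(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in tsets) / n

    items = {i for t in tsets for i in t}
    current = {frozenset([i]) for i in items if support({i}) >= min_support}
    result = {}
    k = 1
    while current:
        for s in current:
            result[s] = support(s)
        k += 1
        # Join step: unite frequent (k-1)-itemsets into k-item candidates,
        # then prune any candidate with an infrequent (k-1)-subset.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = {c for c in candidates
                   if all(frozenset(s) in result for s in combinations(c, k - 1))
                   and support(c) >= min_support}
    return result

db = [["bread", "milk"], ["bread", "beer", "eggs"],
      ["milk", "beer"], ["bread", "milk", "beer"], ["bread", "milk"]]
freq = apriori(db, min_support=0.4)
```

    Lowering min_support or enlarging the database inflates the candidate sets, which is exactly the execution-speed effect the application lets students measure.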

  15. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin by introducing the model as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint

  16. Strengthening the working alliance through a clinician's familiarity with the 12-step approach.

    Science.gov (United States)

    Dennis, Cory B; Roland, Brian D; Loneck, Barry M

    2018-01-01

    The working alliance plays an important role in the substance use disorder treatment process. Many substance use disorder treatment providers incorporate the 12-Step approach to recovery into treatment. With the 12-Step approach known among many clients and clinicians, it may well factor into the therapeutic relationship. We investigated how, from the perspective of clients, a clinician's level of familiarity with and in-session time spent on the 12-Step approach might affect the working alliance between clients and clinicians, including possible differences based on a clinician's recovery status. We conducted a secondary study using data from 180 clients and 31 clinicians. Approximately 81% of client participants were male, and approximately 65% of clinician participants were female. We analyzed data with Stata using a population-averaged model. From the perspective of clients with a substance use disorder, clinicians' familiarity with the 12-Step approach has a positive relationship with the working alliance. The client-estimated amount of in-session time spent on the 12-Step approach did not have a statistically significant effect on ratings of the working alliance. A clinician's recovery status did not moderate the relationship between 12-Step familiarity and the working alliance. These results suggest that clinicians can influence, in part, how their clients perceive the working alliance by being familiar with the 12-Step approach. This might be particularly salient for clinicians who provide substance use disorder treatment at agencies that incorporate, on some level, the 12-Step approach to recovery.

  17. Double ionization of atoms by ion impact: two-step models

    Energy Technology Data Exchange (ETDEWEB)

    Fiori, Marcelo [Departamento de Fisica, Universidad Nacional de Salta, Salta (Argentina); Rocha, A B [Instituto de Quimica, Departamento de FIsico-Quimica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21949-900, RJ (Brazil); Bielschowsky, C E [Instituto de Quimica, Departamento de FIsico-Quimica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21949-900, RJ (Brazil); Jalbert, Ginette [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, 21941-972, RJ (Brazil); Garibotti, C R [CONICET and Centro Atomico Bariloche, 8400 S. C. Bariloche, RIo Negro (Argentina)

    2006-04-14

    Total cross sections for the double ionization of He and Li atoms by the impact of H{sup +}, He{sup 2+} and Li{sup 3+} are calculated at intermediate and high energies within two-step models. The double ionization of He by the impact of other bare projectiles at a fixed energy is obtained as well. Single-ionization probabilities are calculated within the continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximation. The required atomic bound and continuum wave functions are evaluated by numerically solving the atomic wave equation with an optimized potential model (OPM). Correlation between events is introduced by considering ion relaxation. The final-state electronic correlation is considered by means of the so-called Gamow factor. We compare the transition probabilities resulting from our approach with those resulting from the use of a Roothaan-Hartree-Fock initial state and a Coulomb continuum state with an effective charge. We find that the use of OPM waves gives better agreement with the experimental results than the use of Coulomb waves.

  18. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center-of-mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step-width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control.
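    The stabilization metric, the proportion of step-width variance predicted from pelvis state, is the R² of a multiple linear regression. A sketch with synthetic pelvis displacement/velocity data follows; the coefficients and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps = 200

# Synthetic mechanical state of the pelvis at one instant of the step:
# mediolateral displacement and velocity (arbitrary units).
pelvis_disp = rng.normal(0.0, 1.0, n_steps)
pelvis_vel = rng.normal(0.0, 1.0, n_steps)

# Assume step width partly follows pelvis mechanics, plus noise.
step_width = (0.12 + 0.8 * pelvis_disp + 0.5 * pelvis_vel
              + rng.normal(0.0, 0.4, n_steps))

# Multiple linear regression; R^2 = proportion of variance predicted.
X = np.column_stack([np.ones(n_steps), pelvis_disp, pelvis_vel])
beta, *_ = np.linalg.lstsq(X, step_width, rcond=None)
residual = step_width - X @ beta
r2 = 1.0 - residual.var() / step_width.var()
```

    Repeating this fit at successive instants within the step traces out how predictability grows over the course of a step, as described above.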

  19. A stepped-care model of post-disaster child and adolescent mental health service provision

    Directory of Open Access Journals (Sweden)

    Brett M. McDermott

    2014-07-01

    Background: From a global perspective, natural disasters are common events. Published research highlights that a significant minority of exposed children and adolescents develop disaster-related mental health syndromes and associated functional impairment. Consistent with the considerable unmet need of children and adolescents with regard to psychopathology, there is strong evidence that many children and adolescents with post-disaster mental health presentations are not receiving adequate interventions. Objective: To critique existing child and adolescent mental health service (CAMHS) models of care and the capacity of such models to deal with any post-disaster surge in clinical demand; further, to detail an innovative service response, a child and adolescent stepped-care service provision model. Method: A narrative review of traditional CAMHS is presented. Important elements of a disaster response - individual versus community recovery, public health approaches, capacity for promotion and prevention, and service reach - are discussed and compared with the CAMHS approach. Results: Difficulties with traditional models of care are highlighted across all levels of intervention, from the ability to provide preventative initiatives to the capacity to provide intensive specialised posttraumatic stress disorder interventions. In response, our overarching stepped-care model is advocated. The general response is discussed and details of the three tiers of the model are provided: Tier 1, a communication strategy; Tier 2, parent effectiveness and teacher training; and Tier 3, screening linked to trauma-focused cognitive behavioural therapy. Conclusion: In this paper, we argue that traditional CAMHS are not an appropriate model of care to meet the clinical needs of this group in the post-disaster setting. We conclude with suggestions for how improved post-disaster child and adolescent mental health outcomes can be achieved by applying an innovative service approach.

  20. First steps towards modelling high burnup effect in UO{sub 2} fuel

    Energy Technology Data Exchange (ETDEWEB)

    O'Carroll, C; Lassmann, K; Laar, J Van De; Walker, C T [CEC Joint Research Centre, Karlsruhe (Germany)

    1997-08-01

    High burnup initiates a process that can lead to major microstructural changes near the edge of the fuel: the formation of subgrains, the loss of matrix fission gas, and an increase in porosity. A consequence of this is a decrease in thermal conductivity near the edge of the fuel, which may have major implications for the performance of LWR fuels at higher burnup. The mechanism for the changes in grain structure, the apparent depletion of Xe, and the increase in porosity is associated with the high fission density at the fuel periphery, which is in turn due to the preferential capture of epithermal neutrons in the resonances of {sup 238}U. The new model TUBRNP predicts the radial burnup profile as a function of time, together with the radial profile of plutonium. The model has been validated with data from LWR UO{sub 2} fuels with enrichments in the range 2 to 8.25% and burnups between 21 and 75 GWd/t. It has been reported that at high burnup EPMA measures a sharp decrease in the concentration of Xe near the fuel surface. This loss of Xe is interpreted as a signal that the gas has been swept out of the original grains into pores: this "missing" Xe has been measured by XRF. It has been noted experimentally that the restructuring (Xe depletion and changes in grain structure) has an onset threshold at a local burnup in the region of 70 to 80 GWd/t; a specific value was adopted for use in the model. For a given fuel, TUBRNP predicts the local burnup profile, and the depth corresponding to the threshold value is taken to be the thickness of the Xe-depleted region. The theoretical predictions have been compared with experimental data. The results are presented and should be seen as a first step in the development of a more detailed model of this phenomenon. (author). 22 refs, 9 figs, 2 tabs.

  1. Ozonisation of model compounds as a pretreatment step for the biological wastewater treatment

    International Nuclear Information System (INIS)

    Degen, U.

    1979-11-01

    Biological degradability and toxicity of organic substances are two basic criteria determining their behaviour in the natural environment and during the biological treatment of waste waters. In this work, oxidation products of model compounds (p-toluenesulfonic acid, benzenesulfonic acid and aniline) generated by ozonation were tested in a two-step laboratory plant with activated sludge. The organic oxidation products and the initial compounds were the sole source of carbon for the microbes of the adapted activated sludge. The progress of elimination of the compounds was studied by measuring DOC, COD, the UV spectra of the initial compounds, and sulfate. Initial concentrations of the model compounds were 2-4 mmol/l, with 25-75% oxidation of the sulfonic acids. The following oxidation products of p-toluenesulfonic acid were identified and quantitatively measured: methylglyoxal, pyruvic acid, oxalic acid, acetic acid, formic acid and sulfate. With all the various solutions with different concentrations of initial compounds and oxidation products, the biological activity in the two-step laboratory plant could be maintained. p-Toluenesulfonic acid and its oxidation products are biologically degraded. The degradation of p-toluenesulfonic acid was measured by following the increase of the sulfate concentration after biological treatment; this shows that the elimination of p-toluenesulfonic acid is not adsorption but a mineralization step. At high p-toluenesulfonic acid concentration and low concentration of oxidation products, p-toluenesulfonic acid is eliminated with high efficiency (4.3 mole/d m³ = 0.34 kg p-toluenesulfonic acid/d m³). However, at high concentration of oxidation products, p-toluenesulfonic acid is less degraded. The oxidation products are always degraded with an elimination efficiency of 70%. A high load of biologically degradable oxidation products diminished the elimination efficiency of p-toluenesulfonic acid. (orig.)

  2. How many steps/day are enough? for adults

    Directory of Open Access Journals (Sweden)

    Rowe David A

    2011-07-01

    Physical activity guidelines from around the world are typically expressed in terms of frequency, duration, and intensity parameters. Objective monitoring using pedometers and accelerometers offers a new opportunity to measure and communicate physical activity in terms of steps/day. Various step-based versions or translations of physical activity guidelines are emerging, reflecting public interest in such guidance. However, there appears to be a wide discrepancy in the exact values that are being communicated. It makes sense that step-based recommendations should be harmonious with existing evidence-based public health guidelines that recognize that "some physical activity is better than none" while maintaining a focus on time spent in moderate-to-vigorous physical activity (MVPA). Thus, the purpose of this review was to update our existing knowledge of "How many steps/day are enough?" and to inform step-based recommendations consistent with current physical activity guidelines. Normative data indicate that healthy adults typically take between 4,000 and 18,000 steps/day, and that 10,000 steps/day is reasonable for this population, although there are notable "low-active" populations. Interventions demonstrate incremental increases on the order of 2,000-2,500 steps/day. The results of seven different controlled studies demonstrate that there is a strong relationship between cadence and intensity. Further, despite some inter-individual variation, 100 steps/minute represents a reasonable floor value indicative of moderate-intensity walking. Multiplying this cadence by 30 minutes (i.e., typical of a daily recommendation) produces a minimum of 3,000 steps that is best used as a heuristic (i.e., guiding) value, but these steps must be taken over and above habitual activity levels to be a true expression of free-living steps/day that also includes recommendations for minimal amounts of time in MVPA. Computed steps/day translations of time in
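    The review's cadence arithmetic (a 100 steps/minute moderate-intensity floor multiplied by a 30-minute daily recommendation) can be made explicit; the function below simply encodes that heuristic.

```python
def mvpa_steps(cadence_steps_per_min=100, minutes=30):
    """Steps corresponding to a daily MVPA recommendation at a given cadence."""
    return cadence_steps_per_min * minutes

# 100 steps/min for 30 minutes -> a 3,000-step heuristic, to be taken
# over and above habitual activity (normative adult totals fall roughly
# in the 4,000-18,000 steps/day range quoted above).
extra = mvpa_steps()
```
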

  3. Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!

    Science.gov (United States)

    Bank, Paulina J M; Roerdink, Melvyn; Peper, C E

    2011-03-01

    Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.

  4. Aging effect on step adjustments and stability control in visually perturbed gait initiation.

    Science.gov (United States)

    Sun, Ruopeng; Cui, Chuyi; Shea, John B

    2017-10-01

    Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit the original motor plan, select and execute alternative motor commands, and maintain the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used in which the anticipatory postural adjustment (APA) during gait initiation was used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and the Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement during late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e., foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the mediolateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps; however, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening deficits in gait adaptability and preventing falls.

  5. Peak-load pricing in two-step-modeling of power generation and transmission

    International Nuclear Information System (INIS)

    Korunig, Jens-Holger

    2005-01-01

The use of electric current transmission and distribution networks, which represent a monopolistic bottleneck, is inevitable. In the context of the liberalization of electricity markets, vertical separation makes it possible to create a competitive generation market alongside separated transmission and distribution networks, which can be made available to all potential network users under the same conditions. A private, independent network operator, however, would arrange its network in the interest of profit: it would probably dimension the network smaller and try to orient transmission prices to actual costs. From the point of view of the system operator as well as from the economic point of view, time-dependent transmission prices are preferable in view of periodically changing demand. The question that arises is to what extent the ownership structure affects prices and quantities for current transmission, and which ownership structure is best from an economic standpoint (maximization of welfare, ...). It is to be expected that a higher degree of competition yields higher efficiency and larger social welfare. This is analysed in a two-stage model with period-dependent demand, modelling both the power generation and the transmission sector under different ownership structures. Different market types are assumed on both the generation and the distribution stage, and the resulting market outcomes are examined for their welfare effects. This requires realistic modelling above all of the network area, which (in contrast to power generation) represents a natural monopoly: a natural monopoly necessarily has subadditive cost functions. The network sector (and, in a further step, also power generation) is modelled with decreasing average and marginal costs (current research task). If any behaviour is permitted, an enterprise that possesses this natural monopoly will extract monopolist

  6. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Directory of Open Access Journals (Sweden)

    Conor Lawless

Full Text Available Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.

  7. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Science.gov (United States)

    Lawless, Conor; Jurk, Diana; Gillespie, Colin S; Shanley, Daryl; Saretzki, Gabriele; von Zglinicki, Thomas; Passos, João F

    2012-01-01

Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.

  8. Verifying Real-Time Systems using Explicit-time Description Methods

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

Full Text Available Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions and tools based on them have been presented. On the other hand, Explicit-Time Description Methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates usage of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.

  9. Finite-element time-domain modeling of electromagnetic data in general dispersive medium using adaptive Padé series

    Science.gov (United States)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin; Zhdanov, Michael S.

    2017-12-01

The induced polarization (IP) method has been widely used in geophysical exploration to identify chargeable targets such as mineral deposits. The inversion of IP data requires modeling the IP response of 3D dispersive conductive structures. We have developed an edge-based finite-element time-domain (FETD) modeling method to simulate the electromagnetic (EM) fields in 3D dispersive media. We solve the vector Helmholtz equation for the total electric field using the edge-based finite-element method with an unstructured tetrahedral mesh. We adopt the backward Euler method, which is unconditionally stable, with semi-adaptive time stepping for the time-domain discretization. We use a direct solver based on a sparse LU decomposition to solve the system of equations. We consider the Cole-Cole model in order to take into account the frequency-dependent conductivity dispersion. The Cole-Cole conductivity model in the frequency domain is expanded using a truncated Padé series with adaptive selection of the center frequency of the series for early and late times. This approach can significantly increase the accuracy of FETD modeling.
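The Cole-Cole model referenced above describes frequency-dependent dispersion in a chargeable medium. As a minimal sketch, the snippet below evaluates one common parameterization (the Pelton resistivity form, with chargeability m, time constant tau, and frequency exponent c; the parameter values are hypothetical, and the paper's Padé-series time-domain expansion is not reproduced here):

```python
def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Complex resistivity of a Cole-Cole (Pelton-form) dispersive medium:
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))
    where m is the chargeability, tau the time constant, c the exponent."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))
```

At zero frequency this returns the DC resistivity rho0, and at high frequency it approaches rho0 * (1 - m), the two limits that bracket the dispersion.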

  10. Quantum transport with long-range steps on Watts-Strogatz networks

    Science.gov (United States)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN) which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structure disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.
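A continuous-time quantum walk on a regular ring, as in the second part of the abstract, evolves a localized state under exp(-iHt) with H the hopping matrix. A minimal sketch (without the long-range steps or rewired links studied in the paper) that computes the return probability of the initial site:

```python
import numpy as np
from scipy.linalg import expm

def ctqw_return_probability(N, t):
    """Return probability |<0|exp(-iHt)|0>|^2 of a continuous-time quantum
    walk on a ring of N sites, with H the nearest-neighbour hopping matrix."""
    H = np.zeros((N, N))
    for j in range(N):
        H[j, (j + 1) % N] = H[(j + 1) % N, j] = 1.0
    psi0 = np.zeros(N, dtype=complex)
    psi0[0] = 1.0  # walker localized at site 0
    psi_t = expm(-1j * t * H) @ psi0
    return float(abs(psi_t[0]) ** 2)
```

Adding long-range steps would amount to extra nonzero entries in H; the rewiring of the WSN replaces some ring bonds with random ones.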

  11. Generating regionalized neuronal cells from pluripotency, a step-by-step protocol

    Directory of Open Access Journals (Sweden)

    Agnete eKirkeby

    2013-01-01

    Full Text Available Human pluripotent stem cells possess the potential to generate cells for regenerative therapies in patients with neurodegenerative diseases, and constitute an excellent cell source for studying human neural development and disease modeling. Protocols for neural differentiation of human pluripotent stem cells have undergone significant progress during recent years, allowing for rapid and synchronized neural conversion. Differentiation procedures can further be combined with accurate and efficient positional patterning to yield regionalized neural progenitors and subtype-specific neurons corresponding to different parts of the developing human brain. Here, we present a step-by-step protocol for neuralization and regionalization of human pluripotent cells for transplantation studies or in vitro analysis.

  12. Data-based control of a multi-step forming process

    Science.gov (United States)

    Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.

    2017-09-01

The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. In the field of forming technology, however, it has so far arrived only gradually. In order to make a valuable contribution to the digital factory, the control of a multistage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how a tangible interface between the different forming machines can be designed, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes that are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for improved process control of the subsequent process. On the basis of the gained scientific knowledge, it is possible to make forming operations more robust and at the same time more flexible, and thus create the foundation for linking various production processes in an efficient way.

  13. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Science.gov (United States)

    O'Connell, Sandra; ÓLaighin, Gearóid; Quinlan, Leo R

    2017-01-01

Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork; taking an elevator; taking a bus journey; automobile driving; washing and drying dishes; a functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities, and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, both of which involved arm and hand movement, and significant numbers of false positive steps were also registered during the cycling exercises. As false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.

  14. Medium- and Long-term Prediction of LOD Change with the Leap-step Autoregressive Model

    Science.gov (United States)

    Liu, Q. B.; Wang, Q. J.; Lei, M. F.

    2015-09-01

It is known that the accuracy of medium- and long-term prediction of changes in the length of day (LOD) based on the combined least-squares and autoregressive (LS+AR) model decreases gradually. The leap-step autoregressive (LSAR) model is more accurate and stable in medium- and long-term prediction, and it is therefore used to forecast LOD changes in this work. The LOD series from EOP 08 C04, provided by the IERS (International Earth Rotation and Reference Systems Service), is used to compare the effectiveness of the LSAR and traditional AR methods. The prediction series resulting from the two models show that the prediction accuracy of the LSAR model is better than that of the AR model in medium- and long-term prediction.
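The leap-step idea, fitting AR coefficients on samples spaced k steps apart rather than on consecutive samples, can be sketched as below. This is a simplified illustration under stated assumptions; the names and the exact fitting scheme are hypothetical, and the paper's combined least-squares removal of trend and periodic terms is omitted:

```python
import numpy as np

def fit_leap_step_ar(x, p, k):
    """Least-squares fit of AR(p) coefficients on samples spaced k steps
    apart (leap-step AR); k = 1 reduces to an ordinary AR(p) fit."""
    rows, targets = [], []
    for t in range(p * k, len(x)):
        rows.append([x[t - j * k] for j in range(1, p + 1)])
        targets.append(x[t])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return coeffs

def lsar_forecast(x, coeffs, k, n_ahead):
    """Extend the series n_ahead points using the fitted leap-step lags."""
    x = list(x)
    for _ in range(n_ahead):
        x.append(float(sum(c * x[-j * k] for j, c in enumerate(coeffs, start=1))))
    return x[-n_ahead:]
```

For a series that exactly obeys a leap-step recurrence, the least-squares fit recovers the generating coefficients; on real LOD data the choice of p and k governs the medium- and long-term behaviour.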

  15. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    Science.gov (United States)

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the distinction between the models, which contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient was considered constant. Second, the mass transfer coefficient was considered a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones and described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
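Parameter estimation by the downhill simplex method, as used for the mass transfer coefficients above, can be sketched with SciPy's Nelder-Mead implementation. The two-parameter logistic breakthrough curve below is a hypothetical stand-in, not the paper's mass-balance model, and the parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def breakthrough(t, kf, t50):
    """Toy breakthrough curve C/C0 as a logistic in time (hypothetical model:
    kf sets the front steepness, t50 the 50% breakthrough time)."""
    return 1.0 / (1.0 + np.exp(-kf * (t - t50)))

def objective(params, t, observed):
    """Sum of squared residuals between model and 'experimental' curve."""
    kf, t50 = params
    return float(np.sum((breakthrough(t, kf, t50) - observed) ** 2))

t = np.linspace(0.0, 100.0, 50)
observed = breakthrough(t, 0.15, 40.0)  # synthetic data with known parameters
result = minimize(objective, x0=[0.05, 20.0], args=(t, observed),
                  method="Nelder-Mead")  # the downhill simplex method
kf_est, t50_est = result.x
```

The simplex method needs no gradients, which is why it suits models whose response is computed numerically, as in the fixed-bed column simulations.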

  16. Time series modeling of pathogen-specific disease probabilities with subsampled data.

    Science.gov (United States)

    Fisher, Leigh; Wakefield, Jon; Bauer, Cici; Self, Steve

    2017-03-01

Many diseases arise due to exposure to one of multiple possible pathogens. We consider the situation in which disease counts are available over time from a study region, along with a measure of clinical disease severity, for example, mild or severe. In addition, we suppose a subset of the cases are lab tested in order to determine the pathogen responsible for disease. In such a context, we focus interest on modeling the probabilities of disease incidence given pathogen type. The time course of these probabilities is of great interest, as is the association with time-varying covariates such as meteorological variables. In this setup, a natural Bayesian approach would be based on imputation of the unsampled pathogen information using Markov chain Monte Carlo, but this is computationally challenging. We describe a practical approach to inference that is easy to implement. We use an empirical Bayes procedure in a first step to estimate summary statistics. We then treat these summary statistics as the observed data and develop a Bayesian generalized additive model. We analyze data on hand, foot, and mouth disease (HFMD) in China in which there are two pathogens of primary interest, enterovirus 71 (EV71) and Coxsackie A16 (CA16). We find that both EV71 and CA16 are associated with temperature, relative humidity, and wind speed, with reasonably similar functional forms for both pathogens. The important issue of confounding by time is modeled using a penalized B-spline model with a random effects representation. The level of smoothing is addressed by a careful choice of the prior on the tuning variance. © 2016, The International Biometric Society.

  17. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond rank one.

  18. Unexpected perturbations training improves balance control and voluntary stepping times in older adults - a double blind randomized control trial.

    Science.gov (United States)

    Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak

    2016-03-04

Falls are common among the elderly, and most of them occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking is able to improve risk factors for falls. In a double-blind randomized controlled trial, 53 community-dwelling older adults (age 80.1±5.6 years) were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected balance perturbation exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters, and Stabilogram-Diffusion Analysis. The secondary outcome measures were the Falls Efficacy Scale (FES), self-reported late-life function (LLFDI), and the Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in the intervention program that included unexpected loss of balance during walking led to faster voluntary step execution times under single (p = 0.002; effect size [ES] = 0.75) and dual task (p = 0.003; [ES] = 0.89) conditions; intervention group subjects showed improvement in short-term effective diffusion coefficients in the mediolateral direction of the Stabilogram-Diffusion Analysis under eyes-closed conditions (p = 0.012, [ES] = 0.92). Compared to control there were no significant changes in FES, LLFDI, and POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This, however, did not transfer to a change in self-reported function or FES. ClinicalTrials.gov NCT01439451.

  19. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

Full Text Available Following the growth of the powerful sport industry, many new opportunities have appeared for creating new exercise programmes that use particular equipment. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench) whose height can be regulated. Step-aerobics itself can be divided into several groups, depending on the type of music, the working methods, and the prior knowledge of the attendants. In this work, a systematization of the basic steps of step-aerobics was made on the basis of the following criteria: the origin of the step, the number of leg motions in the step, and the body support at the end of the step. Such a systematization provides a concrete overview of the existing basic steps and thus makes it easier to design a step-aerobics lesson.

  20. Numerical Simulation of Air Entrainment for Flat-Sloped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Bentalha Chakib

    2015-03-01

Full Text Available The stepped spillway is a good hydraulic structure for energy dissipation because of the large value of the surface roughness. Its performance is enhanced by the presence of entrained air, which can prevent or reduce cavitation damage. Chanson developed a method to determine the position of the start of air entrainment, called the inception point. Within this work the inception point is determined using the Fluent computational fluid dynamics (CFD) code, where the volume of fluid (VOF) model is used to simulate the air-water interaction on the free surface and turbulence closure is provided by the standard k-ε model; the one-sixth power law distribution of the velocity profile is also verified. In addition, the pressure contours and velocity vectors at the bed surface are determined. The numerical results agree well with experimental results.

  1. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Science.gov (United States)

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26), indicating good predictive validity for falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  2. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin

    2014-07-01

Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  3. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan

    2014-01-01

Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  4. Tunneling time, exit time and exit momentum in strong field tunnel ionization

    International Nuclear Information System (INIS)

    Teeny, Nicolas

    2016-01-01

Tunnel ionization belongs to the fundamental processes of atomic physics. It is still an open question when the electron tunnel ionizes and how long the tunneling takes. In this work we solve the time-dependent Schroedinger equation in one and two dimensions and use ab initio quantum calculations in order to answer these questions. Additionally, we determine the exit momentum of the tunnel-ionized electron from first principles. Our results differ from the assumptions of the commonly employed two-step model, which assumes that the electron ionizes at the instant of the electric field maximum with zero momentum. After determining the quantum final momentum distribution of tunnel-ionized electrons, we show that the two-step model fails to predict the correct final momentum, and accordingly we suggest how to correct it. Furthermore, we determine the instant at which tunnel ionization starts, which turns out to differ from the instant usually assumed. From the instant at which the electron most probably enters the tunneling barrier and the instant at which it exits, we determine the most probable time spent under the barrier. Moreover, we apply a quantum clock approach in order to determine the duration of tunnel ionization. From the quantum clock we obtain an average tunneling time which differs in magnitude and origin from the most probable tunneling time. By defining a probability distribution of tunneling times using virtual detectors, we relate both methods and explain the apparent discrepancy. These results affect the interpretation of experiments that measure the spectra of tunnel-ionized electrons, and specifically the calibration of the so-called attoclock experiments, because models with imprecise assumptions are usually employed to interpret experimental results.

  5. Tunneling time, exit time and exit momentum in strong field tunnel ionization

    Energy Technology Data Exchange (ETDEWEB)

    Teeny, Nicolas

    2016-10-18

Tunnel ionization belongs to the fundamental processes of atomic physics. It is still an open question when the electron tunnel ionizes and how long the tunneling takes. In this work we solve the time-dependent Schroedinger equation in one and two dimensions and use ab initio quantum calculations in order to answer these questions. Additionally, we determine the exit momentum of the tunnel-ionized electron from first principles. Our results differ from the assumptions of the commonly employed two-step model, which assumes that the electron ionizes at the instant of the electric field maximum with zero momentum. After determining the quantum final momentum distribution of tunnel-ionized electrons, we show that the two-step model fails to predict the correct final momentum, and accordingly we suggest how to correct it. Furthermore, we determine the instant at which tunnel ionization starts, which turns out to differ from the instant usually assumed. From the instant at which the electron most probably enters the tunneling barrier and the instant at which it exits, we determine the most probable time spent under the barrier. Moreover, we apply a quantum clock approach in order to determine the duration of tunnel ionization. From the quantum clock we obtain an average tunneling time which differs in magnitude and origin from the most probable tunneling time. By defining a probability distribution of tunneling times using virtual detectors, we relate both methods and explain the apparent discrepancy. These results affect the interpretation of experiments that measure the spectra of tunnel-ionized electrons, and specifically the calibration of the so-called attoclock experiments, because models with imprecise assumptions are usually employed to interpret experimental results.

  6. Multi-step magnetization of the Ising model on a Shastry-Sutherland lattice: a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Huang, W C; Huo, L; Tian, G; Qian, H R; Gao, X S; Qin, M H; Liu, J-M

    2012-01-01

    The magnetization behaviors and spin configurations of the classical Ising model on a Shastry-Sutherland lattice are investigated using Monte Carlo simulations, in order to understand the fascinating magnetization plateaus observed in TmB4 and other rare-earth tetraborides. The simulations reproduce the 1/2 magnetization plateau by taking into account the dipole-dipole interaction. In addition, a narrow 2/3 magnetization step at low temperature is predicted by our simulation. The multi-step magnetization can be understood as a consequence of the competition among the spin-exchange interaction, the dipole-dipole interaction, and the static magnetic energy.
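
    The Metropolis update at the heart of such simulations is compact. The sketch below uses a plain square lattice as a stand-in for the Shastry-Sutherland geometry and omits the dipole-dipole term, so it reproduces simple field saturation rather than the fractional plateaus discussed above; all parameter values are illustrative.

```python
import numpy as np

# Metropolis Monte Carlo sketch for an Ising model in an external field.
# Square lattice, nearest-neighbor coupling only; the Shastry-Sutherland
# bonds and dipolar interactions of the paper are omitted for brevity.
rng = np.random.default_rng(0)
L, J, T = 16, 1.0, 0.5                       # lattice size, coupling, temperature
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, h):
    """One Metropolis sweep: L*L single-spin-flip attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * (J * nn + h)  # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

for _ in range(200):                         # equilibrate in a strong field
    spins = sweep(spins, h=8.0)
m = spins.mean()                             # magnetization per spin
```

    Sweeping the field h and recording m(h) is how the plateau structure would be mapped out; plateaus emerge once the competing interactions from the abstract are included.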

  7. The electrical resistivity of rough thin films: A model based on electron reflection at discrete step edges

    Science.gov (United States)

    Zhou, Tianji; Zheng, Pengyuan; Pandey, Sumeet C.; Sundararaman, Ravishankar; Gall, Daniel

    2018-04-01

    The effect of the surface roughness on the electrical resistivity of metallic thin films is described by electron reflection at discrete step edges. A Landauer formalism for incoherent scattering leads to a parameter-free expression for the resistivity contribution from surface mound-valley undulations that is additive to the resistivity associated with bulk and surface scattering. In the classical limit where the electron reflection probability matches the ratio of the step height h divided by the film thickness d, the additional resistivity Δρ = √(3/2)/(g0d) × ω/ξ, where g0 is the specific ballistic conductance and ω/ξ is the ratio of the root-mean-square surface roughness divided by the lateral correlation length of the surface morphology. First-principles non-equilibrium Green's function density functional theory transport simulations on 1-nm-thick Cu(001) layers validate the model, confirming that the electron reflection probability is equal to h/d and that the incoherent formalism matches the coherent scattering simulations for surface step separations ≥2 nm. Experimental confirmation uses 4.5-52 nm thick epitaxial W(001) layers, where ω = 0.25-1.07 nm and ξ = 10.5-21.9 nm are varied by in situ annealing. Electron transport measurements at 77 and 295 K indicate a linear relationship between Δρ and ω/(ξd), confirming the model predictions. The model suggests a stronger resistivity size effect than predicted by the existing models of Fuchs [Math. Proc. Cambridge Philos. Soc. 34, 100 (1938)], Sondheimer [Adv. Phys. 1, 1 (1952)], Rossnagel and Kuan [J. Vac. Sci. Technol., B 22, 240 (2004)], or Namba [Jpn. J. Appl. Phys., Part 1 9, 1326 (1970)]. It provides a quantitative explanation for the empirical parameters in these models and may explain the recently reported deviations of experimental resistivity values from these models.
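
    The closed-form expression above is easy to evaluate directly. The sketch below plugs in roughness and correlation-length values spanning the W(001) ranges quoted in the abstract; the value used for the specific ballistic conductance g0 is a placeholder, not a number from the paper.

```python
import math

# Direct evaluation of the step-edge resistivity term from the abstract,
# delta_rho = sqrt(3/2) / (g0 * d) * (omega / xi). The g0 value below is
# a placeholder to show scaling, not a material constant from the paper.
def delta_rho(g0, d, omega, xi):
    """Added resistivity from surface mound-valley undulations."""
    return math.sqrt(1.5) / (g0 * d) * (omega / xi)

# Smooth thick film vs rough thin film (parameter ranges from the abstract):
smooth = delta_rho(g0=1.0, d=52e-9, omega=0.25e-9, xi=21.9e-9)
rough = delta_rho(g0=1.0, d=4.5e-9, omega=1.07e-9, xi=10.5e-9)
ratio = rough / smooth    # thin, rough films pick up far more added resistivity
```

    The 1/d factor combined with the ω/ξ ratio is what makes the predicted size effect stronger than in the Fuchs-Sondheimer picture.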

  8. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  9. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    Science.gov (United States)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modelling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction for the mean values of chemical species?
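
    The IEM/LMSE closure named above has a very simple form: each notional particle's scalar relaxes toward the ensemble mean at a rate set by a mixing timescale. The sketch below shows its defining properties on a fully segregated initial condition; the particle count, C_phi, and mixing time are illustrative choices, not values from the study.

```python
import numpy as np

# Sketch of the IEM/LMSE micromixing closure: every notional particle's
# scalar relaxes toward the ensemble mean,
#   dphi/dt = -(C_phi / 2) * (phi - <phi>) / tau.
# Particle count, C_phi and tau are illustrative, not the study's values.
rng = np.random.default_rng(1)
phi = rng.choice([0.0, 1.0], size=10_000)    # fully segregated initial state
c_phi, tau, dt = 2.0, 1.0, 0.01

mean0 = phi.mean()
for _ in range(1000):                        # explicit Euler in time
    phi += -0.5 * c_phi * (phi - phi.mean()) / tau * dt

mean_drift = abs(phi.mean() - mean0)         # IEM leaves the mean unchanged
variance = phi.var()                         # ...while the variance decays
```

    Mean preservation with exponential variance decay is exactly why the choice of a single mixing timescale, questioned in point (2) above, matters so much for segregated initial conditions.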

  10. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to deflection functions and cross sections similar to those obtained from a symmetrization procedure that approximately accounts for the transfer of angular momentum and energy. (Auth.)

  11. Associations between the Objectively Measured Office Environment and Workplace Step Count and Sitting Time: Cross-Sectional Analyses from the Active Buildings Study.

    Science.gov (United States)

    Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi

    2018-06-01

    Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data rely on self-report. This study investigated associations between objectively measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout variables, objectively measured as: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12) and to key office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41): the further participants' workstations were from office destinations, the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other

  12. Control Software for Piezo Stepping Actuators

    Science.gov (United States)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
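
    The combination of regulation toward a reference with online scale-factor estimation can be sketched in a few lines. The loop below is a heavily simplified stand-in with invented numbers; the flight software (asynchronous encoder sampling, brake-disturbance cancellation) is far more elaborate.

```python
# Simplified sketch of regulate-while-estimating: a discrete loop drives a
# runner toward a reference while refining an online estimate of the
# actuator scale factor. All numbers are invented for illustration.
scale_true = 0.9                  # nm of motion per commanded step unit (plant)
scale_est = 1.0                   # controller's initial scale-factor estimate
pos, ref = 0.0, 200_000.0         # start and 200-micron target, in nm

for _ in range(100):
    err = ref - pos
    if abs(err) < 1e-6:
        break                     # converged well inside 10 nm
    cmd = err / scale_est         # feedforward using the current estimate
    moved = scale_true * cmd      # plant response to the command
    scale_est = 0.5 * scale_est + 0.5 * (moved / cmd)  # refine the estimate
    pos += moved

final_err = abs(ref - pos)        # nanometers from the reference
```

    Each pass shrinks the residual error by the factor (1 - scale_true/scale_est), so as the estimate converges the error collapses geometrically, which is the intuition behind the fast convergence for sub-200-micron moves quoted above.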

  13. Automated Bayesian model development for frequency detection in biological time series

    Directory of Open Access Journals (Sweden)

    Oldroyd Giles ED

    2011-06-01

    Full Text Available Abstract Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and

  14. Automated Bayesian model development for frequency detection in biological time series.

    Science.gov (United States)

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time
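
    A toy version of Bayesian frequency detection makes the contrast with a plain Fourier reading concrete. For a single sinusoid in Gaussian noise of known variance, the posterior over frequency is proportional to exp(C(f)/σ²), where C(f) is the Schuster periodogram (Bretthorst's classic result, on which Bayesian Spectrum Analysis builds). The signal below is synthetic.

```python
import numpy as np

# Toy Bayesian frequency detection: single sinusoid in Gaussian noise of
# known variance. The posterior over frequency is proportional to
# exp(C(f) / sigma^2), with C(f) the Schuster periodogram.
rng = np.random.default_rng(2)
n, sigma, f_true = 200, 0.5, 0.1
t = np.arange(n)
y = np.sin(2 * np.pi * f_true * t) + rng.normal(0, sigma, n)

freqs = np.linspace(0.01, 0.4, 2000)
C = np.array([np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))**2 / n
              for f in freqs])              # Schuster periodogram
log_post = C / sigma**2                     # unnormalized log posterior
f_map = freqs[np.argmax(log_post)]          # most probable frequency
```

    Because the frequency grid is free, the posterior can be evaluated on an arbitrarily fine grid regardless of the series length, which is one reason the approach copes better with short, truncated records than a raw discrete Fourier transform.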

  15. Full-Scale Modeling Explaining Large Spatial Variations of Nitrous Oxide Fluxes in a Step-Feed Plug-Flow Wastewater Treatment Reactor.

    Science.gov (United States)

    Ni, Bing-Jie; Pan, Yuting; van den Akker, Ben; Ye, Liu; Yuan, Zhiguo

    2015-08-04

    Nitrous oxide (N2O) emission data collected from wastewater treatment plants (WWTPs) show huge variations between plants and within one plant (both spatially and temporally). Such variations and the relative contributions of various N2O production pathways are not fully understood. This study applied a previously established N2O model incorporating two currently known N2O production pathways by ammonia-oxidizing bacteria (AOB) (namely the AOB denitrification and the hydroxylamine pathways) and the N2O production pathway by heterotrophic denitrifiers to describe and provide insights into the large spatial variations of N2O fluxes in a step-feed full-scale activated sludge plant. The model was calibrated and validated by comparing simulation results with 40 days of N2O emission monitoring data as well as other water quality parameters from the plant. The model demonstrated that the relatively high biomass specific nitrogen loading rate in the Second Step of the reactor was responsible for the much higher N2O fluxes from this section. The results further revealed that the AOB denitrification pathway decreased and the NH2OH oxidation pathway increased along the path of both Steps due to the increasing dissolved oxygen concentration. The overall N2O emission from this step-feed WWTP would be largely mitigated if 30% of the returned sludge were returned to the Second Step to reduce its biomass nitrogen loading rate.

  16. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources in conjunction with the defined recurrence relationships can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
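
    The time-independent (Poisson) frequency-magnitude step mentioned above is usually a Gutenberg-Richter fit, log10 N(≥M) = a - b·M. The sketch below fits one to a small synthetic catalogue; the magnitude counts are invented, not the Iranian catalogue.

```python
import numpy as np

# Fit a Gutenberg-Richter law log10 N(>=M) = a - b*M to cumulative counts.
# The synthetic catalogue below roughly follows b ~ 1, as real catalogues do.
mags = np.array([4.0] * 100 + [4.5] * 32 + [5.0] * 10 + [5.5] * 3 + [6.0] * 1)
m_grid = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
n_cum = np.array([(mags >= m).sum() for m in m_grid])  # cumulative counts

slope, a = np.polyfit(m_grid, np.log10(n_cum), 1)      # least-squares fit
b = -slope                                             # G-R b-value
```

    With a and b in hand, the Poisson assumption turns N(≥M) per unit time directly into exceedance probabilities for each area source; the fault sources above replace this with renewal-model recurrence instead.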

  17. Analysis of multi-step transitions in spin crossover nanochains

    Energy Technology Data Exchange (ETDEWEB)

    Chiruta, Daniel [GEMaC, Université de Versailles Saint-Quentin-en-Yvelines, CNRS-UVSQ (UMR 8635), 78035 Versailles Cedex (France); LISV, Université de Versailles Saint-Quentin-en-Yvelines, 78140 Velizy (France); Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University, Suceava 720229 (Romania); Linares, Jorge, E-mail: jorge.linares@uvsq.fr [GEMaC, Université de Versailles Saint-Quentin-en-Yvelines, CNRS-UVSQ (UMR 8635), 78035 Versailles Cedex (France); Garcia, Yann, E-mail: yann.garcia@uclouvain.be [Institute of Condensed Matter and Nanosciences, Université Catholique de Louvain, Molecules, Solids and Reactivity (IMCN/MOST), Place Louis Pasteur, 1, 1348 Louvain-la-Neuve (Belgium); Dimian, Mihai [Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University, Suceava 720229 (Romania); Dahoo, Pierre Richard [LATMOS, Université de Versailles-Saint-Quentin-en-Yvelines, CNRS-UPMC-UVSQ (UMR 8190), 78280 Guyancourt (France)

    2014-02-01

    The temperature-driven phase transition occurring in spin crossover nanochains has been studied by an Ising-like model considering both short-range and long-range interactions. Various types of spin crossover profiles have been described in this framework, including a novel three-step transition identified in a nanosystem with eight molecules, which is modeled for the first time. Special interest has also been given to stepwise transitions accompanied by two hysteresis loops. The edge and size effects on spin crossover behavior have been investigated in order to get a deeper insight into the underlying mechanisms involved in these unusual spin transitions.

  18. Calibration of an estuarine sediment transport model to sediment fluxes as an intermediate step for simulation of geomorphic evolution

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2009-01-01

    Modeling geomorphic evolution in estuaries is necessary to model the fate of legacy contaminants in the bed sediment and the effect of climate change, watershed alterations, sea level rise, construction projects, and restoration efforts. Coupled hydrodynamic and sediment transport models used for this purpose typically are calibrated to water level, currents, and/or suspended-sediment concentrations. However, small errors in these tidal-timescale models can accumulate to cause major errors in geomorphic evolution, which may not be obvious. Here we present an intermediate step towards simulating decadal-timescale geomorphic change: calibration to estimated sediment fluxes (mass/time) at two cross-sections within an estuary. Accurate representation of sediment fluxes gives confidence in representation of sediment supply to and from the estuary during those periods. Several years of sediment flux data are available for the landward and seaward boundaries of Suisun Bay, California, the landward-most embayment of San Francisco Bay. Sediment flux observations suggest that episodic freshwater flows export sediment from Suisun Bay, while gravitational circulation during the dry season imports sediment from seaward sources. The Regional Oceanic Modeling System (ROMS), a three-dimensional coupled hydrodynamic/sediment transport model, was adapted for Suisun Bay, for the purposes of hindcasting 19th and 20th century bathymetric change, and simulating geomorphic response to sea level rise and climatic variability in the 21st century. The sediment transport parameters were calibrated using the sediment flux data from 1997 (a relatively wet year) and 2004 (a relatively dry year). The remaining years of data (1998, 2002, 2003) were used for validation. The model represents the inter-annual and annual sediment flux variability, while net sediment import/export is accurately modeled for three of the five years. 
The use of sediment flux data for calibrating an estuarine geomorphic

  19. Mixed models for data from thorough QT studies: part 2. One-step assessment of conditional QT prolongation.

    Science.gov (United States)

    Schall, Robert

    2011-01-01

    We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation. Copyright © 2010 John Wiley & Sons, Ltd.

  20. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step-integration using earthquake time histories (TH). A given structure was analysed by both PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
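
    The PSD route is transparent for a linear system: the response spectral density is |H(f)|² times the input PSD, so RMS floor response follows by integration with no time-history runs. The sketch below uses an illustrative single-degree-of-freedom oscillator and a flat input PSD, not numbers from the cited analysis.

```python
import numpy as np

# PSD method for a linear SDOF oscillator: S_out(f) = |H(f)|^2 * S_in(f).
# Natural frequency, damping and input level are illustrative assumptions.
fn, zeta = 5.0, 0.05                  # natural frequency (Hz), damping ratio
f = np.linspace(0.01, 25.0, 5000)
df = f[1] - f[0]
S_in = np.full_like(f, 1e-3)          # flat (white) input PSD

H2 = 1.0 / ((1 - (f / fn)**2)**2 + (2 * zeta * f / fn)**2)
S_out = H2 * S_in                     # response PSD: smooth by construction
rms = np.sqrt(np.sum(S_out) * df)     # RMS of the stationary response
f_peak = f[np.argmax(S_out)]          # resonance, just below fn
```

    Because S_out is a smooth analytic function of frequency, the derived spectra cannot be jagged, which is exactly the enveloping behaviour of the PSD curves noted in point (2) above.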

  1. Some features of stepped and dart-stepped leaders near the ground in natural negative cloud-to-ground lightning discharges

    Directory of Open Access Journals (Sweden)

    X. Qie

    2002-06-01

    Full Text Available Characteristics of the electric fields produced by stepped and dart-stepped leaders in the 200 µs just prior to the return strokes during natural negative cloud-to-ground (CG) lightning discharges have been analyzed by using data from a broad-band slow antenna system with 0.08 µs time resolution in southeastern China. It has been found that the electric field changes between the last stepped leader and the first return stroke could be classified into three categories. The first type is characterized by a small pulse superimposed on the abrupt beginning of the return stroke, and accounts for 42% of all the cases. The second type accounts for 33.3% and is characterized by relatively smooth electric field changes between the last leader pulse and the following return stroke. The third type accounts for 24.7%, and is characterized by small pulses between the last recognizable leader pulse and the following return stroke. On the average, the time interval between the successive leader pulses prior to the first return strokes and subsequent return strokes was 15.8 µs and 9.4 µs, respectively. The distribution of time intervals between successive stepped leader pulses is quite similar to a Gaussian distribution, while that for dart-stepped leader pulses is more similar to a log-normal distribution. Other discharge features, such as the average time interval between the last leader step and the first return stroke peak, and the ratio of the last leader pulse peak to the return stroke amplitude, are also discussed in the paper. Key words. Meteorology and atmospheric dynamics (atmospheric electricity; lightning) – Radio science (electromagnetic noise and interference)
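
    The Gaussian-vs-log-normal comparison above can be made quantitative by comparing maximum log-likelihoods of the two fits. The sample below is synthetic, drawn to stand in for the measured dart-stepped leader intervals (mean near 9.4 µs); only the comparison machinery is the point.

```python
import numpy as np

# Compare Gaussian and log-normal fits to inter-pulse intervals by their
# log-likelihoods at the MLE. Synthetic data stand in for the measurements.
rng = np.random.default_rng(3)
intervals = rng.lognormal(mean=np.log(9.4), sigma=0.5, size=500)  # µs

def gauss_loglik(x):
    """Gaussian log-likelihood evaluated at the MLE (sample mean and std)."""
    mu, sd = x.mean(), x.std()
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2))

def lognorm_loglik(x):
    lx = np.log(x)
    return gauss_loglik(lx) - np.sum(lx)   # Jacobian of the log transform

better_lognormal = lognorm_loglik(intervals) > gauss_loglik(intervals)
```

    For skewed, strictly positive interval data the log-normal likelihood wins clearly; for near-symmetric stepped-leader intervals the Gaussian would come out ahead instead.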

  2. Some features of stepped and dart-stepped leaders near the ground in natural negative cloud-to-ground lightning discharges

    Directory of Open Access Journals (Sweden)

    X. Qie

    Full Text Available Characteristics of the electric fields produced by stepped and dart-stepped leaders in the 200 µs just prior to the return strokes during natural negative cloud-to-ground (CG) lightning discharges have been analyzed by using data from a broad-band slow antenna system with 0.08 µs time resolution in southeastern China. It has been found that the electric field changes between the last stepped leader and the first return stroke could be classified into three categories. The first type is characterized by a small pulse superimposed on the abrupt beginning of the return stroke, and accounts for 42% of all the cases. The second type accounts for 33.3% and is characterized by relatively smooth electric field changes between the last leader pulse and the following return stroke. The third type accounts for 24.7%, and is characterized by small pulses between the last recognizable leader pulse and the following return stroke. On the average, the time interval between the successive leader pulses prior to the first return strokes and subsequent return strokes was 15.8 µs and 9.4 µs, respectively. The distribution of time intervals between successive stepped leader pulses is quite similar to a Gaussian distribution, while that for dart-stepped leader pulses is more similar to a log-normal distribution. Other discharge features, such as the average time interval between the last leader step and the first return stroke peak, and the ratio of the last leader pulse peak to the return stroke amplitude, are also discussed in the paper.

    Key words. Meteorology and atmospheric dynamics (atmospheric electricity; lightning) – Radio science (electromagnetic noise and interference)

  3. Time response measurements of Rosemount Pressure Transmitters (model 3154) of Angra-1 power plant

    International Nuclear Information System (INIS)

    Santos, Roberto Carlos dos; Pereira, Iraci Martinez; Justino, Marcelo C.; Silva, Marcos C.

    2017-01-01

    This paper presents response time measurements of five Rosemount model 3154N pressure transmitters from the Angra I Nuclear Power Plant. The tests were performed using the Hydraulic Ramp and Pressure Step Generator of the Sensor Response Time Measurement laboratory of CEN - Nuclear Engineering Center of IPEN. For each transmitter, damping was adjusted so that the time constant was less than or equal to 500 ms. This value was determined so that the total response time of the protection chain does not exceed the established maximum value of 2 seconds. For each transmitter ten tests were performed, yielding mean time constants of 499.7 ms, 464.1 ms, 473.8 ms, 484.7 ms and 511.5 ms, with mean deviations of 0.85%, 0.24%, 0.97%, 1.26% and 0.64%, respectively. (author)
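
    For a first-order sensor, the time constant quoted above is the time for the step response y(t) = 1 - exp(-t/τ) to reach 63.2% of its final value. The sketch below reads τ off a simulated trace (not lab data) to show the extraction procedure.

```python
import numpy as np

# Extract a first-order time constant from a normalized pressure-step
# response: tau is the time to reach 1 - exp(-1) ~ 63.2% of the final value.
# The trace is simulated; tau_true echoes the 499.7 ms transmitter above.
tau_true = 0.4997                     # seconds
t = np.linspace(0.0, 3.0, 3001)
y = 1.0 - np.exp(-t / tau_true)       # normalized step response

threshold = 1.0 - np.exp(-1.0)        # 63.2% of the final value
tau_est = t[np.searchsorted(y, threshold)]
```

    On noisy laboratory traces the same threshold crossing is typically found after filtering or by fitting the exponential directly, rather than by a single-sample lookup.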

  4. Time response measurements of Rosemount Pressure Transmitters (model 3154) of Angra-1 power plant

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Roberto Carlos dos; Pereira, Iraci Martinez [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Justino, Marcelo C.; Silva, Marcos C., E-mail: rcsantos@ipen.br, E-mail: justino@eletronuclear.gov.br [Eletrobrás Termonuclear S.A. (ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper presents response time measurements of five Rosemount model 3154N pressure transmitters from the Angra I Nuclear Power Plant. The tests were performed using the Hydraulic Ramp and Pressure Step Generator of the Sensor Response Time Measurement laboratory of CEN - Nuclear Engineering Center of IPEN. For each transmitter, damping was adjusted so that the time constant was less than or equal to 500 ms. This value was determined so that the total response time of the protection chain does not exceed the established maximum value of 2 seconds. For each transmitter ten tests were performed, yielding mean time constants of 499.7 ms, 464.1 ms, 473.8 ms, 484.7 ms and 511.5 ms, with mean deviations of 0.85%, 0.24%, 0.97%, 1.26% and 0.64%, respectively. (author)

  5. Big Data impacts on stochastic Forecast Models: Evidence from FX time series

    Directory of Open Access Journals (Sweden)

    Sebastian Dietz

    2013-12-01

    Full Text Available With the rise of the Big Data paradigm new tasks for prediction models appeared. In addition to the volume problem of such data sets, nonlinearity becomes important, as the more detailed data sets contain also more comprehensive information, e.g. about non-regular seasonal or cyclical movements as well as jumps in time series. This essay compares two nonlinear methods for predicting a high frequency time series, the USD/Euro exchange rate. The first method investigated is Autoregressive Neural Network Processes (ARNN), a neural network based nonlinear extension of classical autoregressive process models from time series analysis (see Dietz 2011). Its advantage is its simple but scalable time series process model architecture, which is able to include all kinds of nonlinearities based on the universal approximation theorem of Hornik, Stinchcombe and White 1989 and the extensions of Hornik 1993. However, restrictions related to the numeric estimation procedures limit the flexibility of the model. The alternative is a Support Vector Machine model (SVM, Vapnik 1995). The two methods compared have different approaches to error minimization (empirical error minimization for the ARNN vs. structural risk minimization for the SVM). Our new finding is that time series data classified as "Big Data" need new methods for prediction. Estimation and prediction were performed using the statistical programming language R. Besides prediction results we also discuss the impact of Big Data on data preparation and model validation steps.
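
    The one-step-ahead setup shared by both of the compared methods (lag embedding, train/test split, comparison against a naive baseline) can be sketched compactly. A plain least-squares AR(2) stands in below for the paper's ARNN and SVM learners, and the series is synthetic, not the USD/Euro rate.

```python
import numpy as np

# One-step-ahead forecasting scaffold on a synthetic autoregressive series.
# A least-squares AR(2) replaces the ARNN/SVM learners for brevity; the
# embedding, split, and persistence baseline are the reusable parts.
rng = np.random.default_rng(4)
n = 2000
x = np.zeros(n)
for i in range(2, n):                        # synthetic AR(2) process
    x[i] = 0.3 * x[i - 1] - 0.2 * x[i - 2] + rng.normal(0, 0.1)

X = np.column_stack([x[1:-1], x[:-2]])       # predict x[t] from x[t-1], x[t-2]
y = x[2:]
X_tr, y_tr, X_te, y_te = X[:1600], y[:1600], X[1600:], y[1600:]

coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_te @ coef
mse_model = np.mean((pred - y_te)**2)
mse_naive = np.mean((X_te[:, 0] - y_te)**2)  # persistence ("no-change") baseline
```

    Beating the persistence baseline on held-out data is the minimal bar any FX forecasting model, linear or nonlinear, has to clear.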

  6. Effect of different air-drying time on the microleakage of single-step self-etch adhesives

    Directory of Open Access Journals (Sweden)

    Horieh Moosavi

    2013-05-01

    Full Text Available Objectives This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups according to the air-drying time: without application of an air stream, following the manufacturer's instructions, or for 10 sec more than the manufacturer's instructions. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instructions (p < 0.001). The microleakage of BF reached its lowest values after increasing the drying time to 10 sec more than the manufacturer's instructions (p < 0.001). Microleakage of OBAO and CSB was significantly lower compared to BF at all three drying times (p < 0.001). Conclusions Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives reduced microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.

  7. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.
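    The two-stage idea (preselect, then predict model-free) can be sketched as follows; a simple lagged-correlation filter stands in for the causal preselection of the paper, followed by nearest-neighbor prediction in the reduced predictor space. The threshold and the synthetic driver process are illustrative assumptions.

```python
import numpy as np

def preselect(X, y, threshold=0.2):
    """Keep predictor columns whose lag-1 correlation with y exceeds threshold.
    (A stand-in for the causal preselection step, not the paper's algorithm.)"""
    corr = [abs(np.corrcoef(X[:-1, j], y[1:])[0, 1]) for j in range(X.shape[1])]
    return [j for j, c in enumerate(corr) if c > threshold]

def knn_forecast(X, y, x_now, k=5):
    """Predict the next y as the mean outcome of the k nearest past states."""
    dists = np.linalg.norm(X[:-1] - x_now, axis=1)
    return float(np.mean(y[1:][np.argsort(dists)[:k]]))

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                          # the only true driver
X = np.column_stack([x1, rng.normal(size=n), rng.normal(size=n)])
y = np.empty(n); y[0] = 0.0
y[1:] = 0.9 * x1[:-1] + 0.1 * rng.normal(size=n - 1)

selected = preselect(X, y)                       # irrelevant columns drop out
forecast = knn_forecast(X[:, selected], y, X[-1, selected])
```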

  8. A Novel Molten Salt Reactor Concept to Implement the Multi-Step Time-Scheduled Transmutation Strategy

    International Nuclear Information System (INIS)

    Csom, Gyula; Feher, Sandor; Szieberthj, Mate

    2002-01-01

    Nowadays the molten salt reactor (MSR) concept seems to be reviving as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems, the fuel and the material to be transmuted circulate dissolved in a molten salt. The main advantage of this reactor type is the possibility of continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, which yields higher transmutation efficiencies than the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and duration. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named the 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are coupled only neutronically and thermally. This new concept makes possible the utilization of the spatial dependence of the spectrum as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations aimed at finding the optimal implementation of this new concept and highlighting its other advantageous features are ongoing. (authors)

  9. Modeling and Simulation of Out of Step Blocking Relay for

    Directory of Open Access Journals (Sweden)

    Ahmed A. Al Adwani

    2013-05-01

    Full Text Available This paper investigates the effect of power swings on the performance of a distance protection relay installed on an HV/EHV transmission line, as well as on power system stability. A conventional distance relay cannot operate properly under transient stability conditions; it therefore mal-operates, which adversely affects its trip signals. To overcome this problem, an Out-of-Step (OOS) relay has been modeled and simulated to work jointly with the distance relay and to supervise and control its trip signal response. The setting characteristic of the OOS relay is based on the concentric polygon scheme to detect power swings under transient stability conditions. The modeling and simulation were carried out in Matlab/Simulink. Two relays were implemented and tested with two equivalent networks connected at the line ends. The results of this study show the effectiveness and reliability of this approach in controlling the distance relay response under transient stability conditions, and indicate the possibility of detecting faults that may occur during a power swing.

  10. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, and the result is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation behind this is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated.
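    The Kalman-like correction of the analysis step can be sketched in a few lines of numpy. This is the plain stochastic EnKF update with perturbed observations (the very perturbations the deterministic SEIK-OSA variant is designed to avoid), with illustrative dimensions:

```python
import numpy as np

def enkf_analysis(Ef, y_obs, H, R, rng):
    """Stochastic EnKF analysis step.
    Ef: (n, N) forecast ensemble, y_obs: (m,) observation,
    H: (m, n) observation operator, R: (m, m) observation error covariance."""
    n, N = Ef.shape
    Xf = Ef - Ef.mean(axis=1, keepdims=True)
    Pf = Xf @ Xf.T / (N - 1)                         # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, N).T
    return Ef + K @ (Y - H @ Ef)                     # analysis ensemble

rng = np.random.default_rng(0)
Ef = rng.normal(size=(2, 50))          # forecast ensemble centred near 0
H = np.array([[1.0, 0.0]])             # observe the first state component
R = np.array([[0.01]])
y_obs = np.array([2.0])
Ea = enkf_analysis(Ef, y_obs, H, R, rng)
```

    With an accurate observation, the analysis ensemble mean of the observed component moves close to the observed value, as expected from the Kalman correction.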

  11. Discrete-time control system design with applications

    CERN Document Server

    Rabbath, C A

    2014-01-01

    This book presents practical techniques of discrete-time control system design. In general, the design techniques lead to low-order dynamic compensators that ensure satisfactory closed-loop performance for a wide range of sampling rates. The theory is given in the form of theorems, lemmas, and propositions. The design of the control systems is presented as step-by-step procedures and algorithms. The proposed feedback control schemes are applied to well-known dynamic system models. This book also discusses: Closed-loop performance of generic models of mobile robot and airborne pursuer dynamic systems under discrete-time feedback control with limited computing capabilities Concepts of discrete-time models and sampled-data models of continuous-time systems, for both single- and dual-rate operation Local versus global digital redesign Optimal, closed-loop digital redesign methods Plant input mapping design Generalized holds and samplers for use in feedback control loops, Numerical simulation of fixed-point arithm...

  12. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  13. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Directory of Open Access Journals (Sweden)

    Ian C McDowell

    2018-01-01

    Full Text Available Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.
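    A drastically simplified sketch of the clustering idea: each cluster is a Gaussian process with an RBF kernel, and each series is assigned to the cluster under which its GP log-likelihood is highest. Unlike DPGP, the number of clusters is fixed and assignments are hard; the kernel, seed series, and data below are illustrative.

```python
import numpy as np

def rbf_kernel(t, length=2.0, var=1.0, noise=0.1):
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2) + noise * np.eye(len(t))

def gp_loglik(y, mean, K_inv, logdet):
    r = y - mean
    return -0.5 * (r @ K_inv @ r + logdet)   # up to an additive constant

def cluster_series(Y, t, init_idx=(0, 10), iters=5):
    """Hard-EM mixture of GPs, initialized from two seed series."""
    K = rbf_kernel(t)
    K_inv = np.linalg.inv(K)
    _, logdet = np.linalg.slogdet(K)
    means = Y[list(init_idx)].copy()
    for _ in range(iters):
        labels = np.array([np.argmax([gp_loglik(y, m, K_inv, logdet)
                                      for m in means]) for y in Y])
        means = np.stack([Y[labels == c].mean(axis=0)
                          for c in range(len(init_idx))])
    return labels

t = np.linspace(0, 10, 25)
rng = np.random.default_rng(1)
Y = np.vstack([np.sin(t) + 0.1 * rng.normal(size=(10, len(t))),    # cluster A
               -np.sin(t) + 0.1 * rng.normal(size=(10, len(t)))])  # cluster B
labels = cluster_series(Y, t)
```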

  14. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Directory of Open Access Journals (Sweden)

    Sandra O'Connell

    Full Text Available Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork; taking an elevator; taking a bus journey; automobile driving; washing and drying dishes; a functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. The activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities, and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement (P < 0.01 for both). The ActivPAL™ registered a significant number of false positive steps during the cycling exercises (P < 0.001 for both). As a number of false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping

  15. Microsoft® SQL Server® 2008 MDX Step by Step

    CERN Document Server

    Smith, Bryan; Consulting, Hitachi

    2009-01-01

    Teach yourself the Multidimensional Expressions (MDX) query language-one step at a time. With this practical, learn-by-doing tutorial, you'll build the core techniques for using MDX with Analysis Services to deliver high-performance business intelligence solutions. Discover how to: Construct and execute MDX queriesWork with tuples, sets, and expressionsBuild complex sets to retrieve the exact data users needPerform aggregation functions and navigate data hierarchiesAssemble time-based business metricsCustomize an Analysis Services cube through the MDX scriptImplement dynamic security to cont

  16. Effect of step width manipulation on tibial stress during running.

    Science.gov (United States)

    Meardon, Stacey A; Derrick, Timothy R

    2014-08-22

    Narrow step width has been linked to variables associated with tibial stress fracture. The purpose of this study was to evaluate the effect of step width on bone stresses using a standardized model of the tibia. 15 runners ran at their preferred 5k running velocity in three running conditions: preferred step width (PSW) and PSW ± 5% of leg length. 10 successful trials of force and 3-D motion data were collected. A combination of inverse dynamics, musculoskeletal modeling and beam theory was used to estimate stresses applied to the tibia using subject-specific anthropometrics and motion data. The tibia was modeled as a hollow ellipse. Multivariate analysis revealed that tibial stresses at the distal 1/3 of the tibia differed with step width manipulation (p=0.002). Compression on the posterior and medial aspects of the tibia was inversely related to step width, such that as step width increased, compression on the surface of the tibia decreased (linear trend p=0.036 and 0.003). Similarly, tension on the anterior surface of the tibia decreased as step width increased (linear trend p=0.029). Widening step width linearly reduced shear stress at all 4 sites. Stresses experienced by the tibia during running were influenced by step width when using a standardized model of the tibia. Wider step widths were generally associated with reduced loading of the tibia and may benefit runners at risk of or experiencing stress injury at the tibia, especially if they present with a crossover running style. Copyright © 2014 Elsevier Ltd. All rights reserved.
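    With the hollow-ellipse idealization, the stress estimate becomes a direct application of beam theory. The sketch below computes a surface normal stress from an axial force plus bending moments; the geometry and loads are illustrative numbers, not subject data from the study.

```python
import numpy as np

def hollow_ellipse_properties(a_out, b_out, a_in, b_in):
    """Area and second moments of area of a hollow elliptical section.
    a: semi-axis along x, b: semi-axis along y (all in metres)."""
    area = np.pi * (a_out * b_out - a_in * b_in)
    Ix = np.pi / 4 * (a_out * b_out ** 3 - a_in * b_in ** 3)
    Iy = np.pi / 4 * (a_out ** 3 * b_out - a_in ** 3 * b_in)
    return area, Ix, Iy

def surface_stress(F_axial, Mx, My, a_out, b_out, a_in, b_in, x, y):
    """Normal stress at surface point (x, y): axial term plus biaxial bending."""
    area, Ix, Iy = hollow_ellipse_properties(a_out, b_out, a_in, b_in)
    return F_axial / area - Mx * y / Ix + My * x / Iy

# Illustrative mid-shaft geometry (m) and load (N, N*m): compressive axial
# force plus a bending moment, evaluated at the point (0, b_out).
sigma = surface_stress(-2000.0, 40.0, 0.0,
                       0.012, 0.018, 0.008, 0.012, 0.0, 0.018)
```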

  17. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    Science.gov (United States)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
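    The adequacy of a second-order model can be illustrated with a least-squares AR(2) fit; below, the coefficients of a known AR(2) process are recovered from synthetic data (an illustrative series, not the tracking records of the study).

```python
import numpy as np

def fit_ar2(y):
    """Least-squares estimate of (phi1, phi2) in
    y[t] = phi1*y[t-1] + phi2*y[t-2] + e[t]."""
    X = np.column_stack([y[1:-1], y[:-2]])      # lag-1 and lag-2 regressors
    phi, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return phi

rng = np.random.default_rng(0)
n = 2000
y = np.zeros(n)
for t in range(2, n):                           # simulate a stable AR(2)
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + 0.1 * rng.normal()
phi = fit_ar2(y)
```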

  18. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    Science.gov (United States)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market within the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, containing some delay, characteristic of any double-auction market. Our model turns out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
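    The bid-ask-bounce memory can be mimicked with a toy CTRW in which each jump reverses the previous one with high probability, while waiting times stay i.i.d. exponential. The reversal probability below is an illustrative parameter, not an estimate from market data; the one-step memory shows up as a negative lag-1 jump autocorrelation.

```python
import numpy as np

def simulate_ctrw(n_jumps=20000, p_reverse=0.8, rate=1.0, seed=0):
    """CTRW with one-step jump memory and i.i.d. exponential waiting times."""
    rng = np.random.default_rng(seed)
    waits = rng.exponential(1.0 / rate, size=n_jumps)
    jumps = np.empty(n_jumps)
    jumps[0] = rng.choice([-1.0, 1.0])
    for i in range(1, n_jumps):
        flip = rng.random() < p_reverse        # bid-ask-bounce-like reversal
        jumps[i] = -jumps[i - 1] if flip else jumps[i - 1]
    return np.cumsum(waits), np.cumsum(jumps), jumps

times, prices, jumps = simulate_ctrw()
lag1 = np.corrcoef(jumps[:-1], jumps[1:])[0, 1]  # expected near 1 - 2*p_reverse
```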

  19. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    Science.gov (United States)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
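    The first stage of the scheme, building an initial MSM from simulation trajectories, reduces to counting transitions at a chosen lag time and row-normalizing. The sketch below recovers the transition matrix of a synthetic two-state trajectory; refinement against experimental data would then adjust these probabilities.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition count matrix at the given lag time."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0            # unvisited states keep zero rows
    return C / rows

rng = np.random.default_rng(0)
T_true = np.array([[0.9, 0.1],       # sticky two-state reference process
                   [0.1, 0.9]])
dtraj = [0]
for _ in range(5000):
    dtraj.append(rng.choice(2, p=T_true[dtraj[-1]]))
T_hat = estimate_msm(np.array(dtraj), 2)
```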

  20. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  1. Evaluating Web-Scale Discovery Services: A Step-by-Step Guide

    Directory of Open Access Journals (Sweden)

    Joseph Deodato

    2015-06-01

    Full Text Available Selecting a web-scale discovery service is a large and important undertaking that involves a significant investment of time, staff, and resources. Finding the right match begins with a thorough and carefully planned evaluation process. In order to be successful, this process should be inclusive, goal-oriented, data-driven, user-centered, and transparent. The following article offers a step-by-step guide for developing a web-scale discovery evaluation plan rooted in these five key principles based on best practices synthesized from the literature as well as the author’s own experiences coordinating the evaluation process at Rutgers University. The goal is to offer academic libraries that are considering acquiring a web-scale discovery service a blueprint for planning a structured and comprehensive evaluation process.

  2. A model of negotiation scenarios based on time, relevance and control used to define advantageous positions in a negotiation

    Directory of Open Access Journals (Sweden)

    Omar Guillermo Rojas Altamirano

    2016-04-01

    Full Text Available Models that apply to negotiation are based on different perspectives that range from the relationship between the actors, to game theory, to the steps in a procedure. This research proposes a model of negotiation scenarios that considers three factors (time, relevance and control), which emerge as the most important in a negotiation. These factors interact with each other and create different scenarios for each of the actors involved in a negotiation. The proposed model not only facilitates the creation of a negotiation strategy but also an ideal choice of effective tactics.

  3. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    Science.gov (United States)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of the signal's low-frequency components, and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with the maximum improvement around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
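    The LS+AR baseline that LSAR is compared against can be sketched directly: remove a least-squares trend, fit an AR model to the residuals, and iterate the AR recursion for the multi-step forecast. (The leap-step variant would fit the AR model to a subsampled, leap-step sequence instead; that refinement is omitted here, and the series below is synthetic rather than the EOP 08 C04 data.)

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] = sum_k phi_k * x[t-k] + e[t]."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return phi

def forecast_ar(x, phi, steps):
    """Iterate the AR recursion to forecast multiple steps ahead."""
    hist = list(x)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(phi, hist[-1:-len(phi) - 1:-1]))
        hist.append(nxt); out.append(nxt)
    return np.array(out)

def ls_ar_forecast(series, p=4, steps=30):
    t = np.arange(len(series))
    coef = np.polyfit(t, series, 1)              # least-squares linear trend
    resid = series - np.polyval(coef, t)
    phi = fit_ar(resid, p)                       # AR model of the residuals
    t_fut = np.arange(len(series), len(series) + steps)
    return np.polyval(coef, t_fut) + forecast_ar(resid, phi, steps)

rng = np.random.default_rng(0)
n = 600
noise = np.zeros(n)
for t in range(1, n):                            # AR(1) residual process
    noise[t] = 0.8 * noise[t - 1] + 0.05 * rng.normal()
series = 0.002 * np.arange(n) + noise
pred = ls_ar_forecast(series)
```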

  4. Implementation of a variable-step integration technique for nonlinear structural dynamic analysis

    International Nuclear Information System (INIS)

    Underwood, P.; Park, K.C.

    1977-01-01

    The paper presents the implementation of a recently developed unconditionally stable implicit time integration method in a production computer code for the transient response analysis of nonlinear structural dynamic systems. The time integrator is packaged with two significant features: a variable step size that is automatically determined, and the ability to change the step size without additional matrix refactorizations. The equations of motion solved by the time integrator must be cast in the pseudo-force form, and this provides the mechanism for controlling the step size. Step size control is accomplished by extrapolating the pseudo-force to the next time (the predicted pseudo-force), then performing the integration step and recomputing the pseudo-force based on the current solution (the corrected pseudo-force); from these data an error norm is constructed, whose value determines the step size for the next step. To avoid refactoring the required matrix with each step size change, a matrix scaling technique is employed, which allows step sizes to change by a factor of 100 without refactoring. If during a computer run the integrator determines it can run with a step size larger than 100 times the original minimum step size, the matrix is refactored to take advantage of the larger step size. The strategies for effecting these features are discussed in detail. (Auth.)
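    The pseudo-force step-size logic can be sketched as follows: extrapolate the nonlinear term, take the step, recompute the term from the new solution, and let the discrepancy norm accept or reject the step and set the next step size. The integrator here is a plain explicit Euler scheme on a pendulum, standing in for the production implicit integrator; the tolerance and step bounds are illustrative.

```python
import numpy as np

def pseudo_force(u):
    theta, omega = u
    return np.array([0.0, -np.sin(theta)])       # nonlinear restoring term

def adaptive_integrate(u0, t_end, h0=0.1, tol=1e-3):
    A = np.array([[0.0, 1.0], [0.0, 0.0]])       # linear part: theta' = omega
    u, t, h = np.array(u0, float), 0.0, h0
    f_prev = pseudo_force(u)
    accepted = 0
    while t < t_end:
        f_pred = 2 * pseudo_force(u) - f_prev    # predicted pseudo-force
        u_new = u + h * (A @ u + f_pred)         # explicit Euler step (sketch)
        err = np.linalg.norm(pseudo_force(u_new) - f_pred)
        if err > tol and h > 1e-4:
            h *= 0.5                             # reject: retry with smaller step
            continue
        f_prev = pseudo_force(u)                 # corrected force of accepted state
        u, t, accepted = u_new, t + h, accepted + 1
        if err < tol / 4:
            h = min(h * 2, 0.2)                  # grow the step when error is small
    return u, accepted

u_final, n_steps = adaptive_integrate([0.5, 0.0], 5.0)
```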

  5. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Full Text Available Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in student enrollment annually due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step toward a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied to lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated.

  6. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    OpenAIRE

    Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M

    2011-01-01

    Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...

  7. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from the single-step method and the traditional pedigree-based method for two litter size traits, total number of piglets born (TNB) and litter size at five days after birth (Ls 5), in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only...

  8. Time Domain Analysis of Graphene Nanoribbon Interconnects Based on Transmission Line ‎Model

    Directory of Open Access Journals (Sweden)

    S. Haji Nasiri

    2012-03-01

    Full Text Available Time domain analysis of multilayer graphene nanoribbon (MLGNR) interconnects, based on transmission line modeling (TLM) using a sixth-order linear parametric expression, has been presented for the first time. We have studied the effects of interconnect geometry along with its contact resistance on its step response and Nyquist stability. It is shown that by increasing interconnect dimensions, propagation delays increase and accordingly the system becomes relatively more stable. In addition, we have compared the time responses and Nyquist stabilities of MLGNR and SWCNT bundle interconnects with the same external dimensions. The results show that, under the same conditions, the propagation delays for MLGNR interconnects are smaller than those of SWCNT bundle interconnects. Hence, SWCNT bundle interconnects are relatively more stable than their MLGNR rivals.
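    The qualitative trend (larger interconnects give longer delays) can be reproduced with a much cruder model than the sixth-order TLM of the paper: a lumped RC ladder driven by a unit step. The totals below are illustrative values, not extracted GNR parameters.

```python
import numpy as np

def rc_ladder_delay(R_total, C_total, n_seg=20, dt=5e-14, t_end=2e-9):
    """50% step-response delay at the far end of an open-ended RC ladder."""
    R, C = R_total / n_seg, C_total / n_seg
    v = np.zeros(n_seg)                          # node voltages
    t, t50 = 0.0, None
    for _ in range(int(t_end / dt)):
        left = np.concatenate(([1.0], v[:-1]))   # unit step drives the near end
        i_in = (left - v) / R
        i_out = np.concatenate(((v[:-1] - v[1:]) / R, [0.0]))  # open far end
        v = v + dt * (i_in - i_out) / C          # explicit update of each node
        t += dt
        if t50 is None and v[-1] >= 0.5:
            t50 = t
    return t50

d_small = rc_ladder_delay(1e3, 1e-13)    # 1 kOhm, 100 fF total
d_large = rc_ladder_delay(2e3, 2e-13)    # doubled R and C totals
```

    Doubling both totals quadruples the RC product, so the 50% delay grows accordingly, matching the trend the record describes for growing interconnect dimensions.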

  9. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  10. Radiation environmental real-time monitoring and dispersion modeling

    International Nuclear Information System (INIS)

    Kovacik, A.; Bartokova, I.; Omelka, J.; Melicherova, T.

    2014-01-01

    The system of real-time radiation monitoring provided by MicroStep-MIS is a turn-key solution for the measurement, acquisition, processing, reporting, archiving and display of various radiation data. At the measurement level, the monitoring stations can be equipped with various devices, from radiation probes measuring the actual ambient gamma dose rate to fully automated aerosol monitors returning analysis results for concentrations of natural and man-made radionuclides in the air. Using data gathered by our RPSG-05 radiation probes integrated into the monitoring network of the Crisis Management of the Slovak Republic and into the monitoring network of the Slovak Hydrometeorological Institute, we demonstrate their reliability and the long-term stability of their measurements. Data from RPSG-05 probes and GammaTracer probes, both of which are used in the SHI network, are compared. The sensitivity of the RPSG-05 is documented on data where changes of dose rate are caused by precipitation. The qualities of the RPSG-05 probe are also illustrated by the example of its use in the radiation monitoring network of the United Arab Emirates. More detailed information about the radioactivity of the atmosphere can be obtained by using spectrometric detectors (e.g. scintillation detectors), which, besides gamma dose rate values, also offer the possibility of identifying different radionuclides. However, this possibility is limited by technical parameters of the detector such as energy resolution and detection efficiency in the given measurement geometry. Clearer, less ambiguous information can be obtained from aerosol monitors with a built-in silicon detector of alpha and beta particles and with an electrically cooled HPGe detector dedicated to gamma-ray spectrometry, which is performed during sampling. Data from a complex radiation monitoring network can be used, together with meteorological data, in the radiation dispersion model by MicroStep-MIS. This model serves for simulation of the atmospheric propagation of radionuclides

  11. Extraction of Human Stepping Pattern Using Acceleration Sensors

    Directory of Open Access Journals (Sweden)

    Toyohira Takayuki

    2017-01-01

    Full Text Available Gait analysis plays an important role in characterizing individuals and their conditions, and gait analysis systems have been developed using various devices and instruments. However, most systems do not capture the synchronous stepping actions of the right and left feet. To obtain a precise gait pattern, a synchronous walking sensing system is developed, in which a pair of acceleration and angular velocity sensors is attached to the left and right shoes of a walking person and their data are transmitted to a PC through a wireless channel. Walking data from 19 persons aged 14 to 20 are acquired for walking analysis. Stepping time diagrams are extracted from the acquired data of right- and left-foot actions of stepping off and on the ground; these time diagrams distinguish between an ordinary person and a person with an injured left leg, and the stepping recovery process of the injured person is shown. Synchronous sensing of the stepping actions of the right and left feet contributes to obtaining precise stepping patterns.
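    The stepping-time extraction described above can be sketched in a minimal form: an upward threshold crossing on the vertical acceleration marks a candidate step event, and the crossing times give the stepping diagram. The signal, sampling rate and threshold below are illustrative assumptions, not the authors' actual pipeline.

```python
def detect_steps(acc, fs, threshold=1.5):
    """Times (s) of upward threshold crossings: candidate heel strikes."""
    steps = []
    for i in range(1, len(acc)):
        if acc[i - 1] < threshold <= acc[i]:
            steps.append(i / fs)
    return steps

# Synthetic trace: baseline 1 g with an impact spike every 50 samples
# at fs = 100 Hz, i.e. one step every 0.5 s (illustrative data).
fs = 100
acc = [1.0] * 500
for k in range(50, 500, 50):
    acc[k] = 2.5

step_times = detect_steps(acc, fs)
intervals = [round(b - a, 2) for a, b in zip(step_times, step_times[1:])]
print(step_times[:3], intervals[:3])
```

    On real sensor data one would band-pass filter the trace first and pair the left and right channels to build the stepping time diagram.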

  12. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    Science.gov (United States)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. They must therefore be prevented, which, in turn, requires reliable and fast predictive tools. This work presents a three-step method to find the stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment is performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters under high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The method permits not only the frequencies of unstable acoustic oscillations to be computed, but also their amplitudes. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method also has a low cost, because it does not require any license for computational fluid dynamics software.

  13. The step complexity measure for emergency operating procedures: measure verification

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea; Ha, Jaejoo; Park, Changkue

    2002-01-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human errors play a major role in many accidents. Therefore, to prevent accidents and ensure system safety, extensive effort has been made to identify the significant factors that can cause human errors. According to related studies, written manuals and operating procedures are among the most important of these factors, and poor understandability is one of the major causes of procedure-related human errors. Many qualitative checklists have been suggested for evaluating the emergency operating procedures (EOPs) of NPPs. However, since qualitative evaluations using checklists have some drawbacks, a quantitative measure that can quantify the complexity of EOPs is needed to compensate for them. To quantify the complexity of the steps included in EOPs, Park et al. suggested the step complexity (SC) measure. In addition, to ascertain the appropriateness of the SC measure, averaged step performance time data obtained from emergency training records for the loss of coolant accident and the excess steam dump event were compared with estimated SC scores. Although the averaged step performance time data showed good correlation with the estimated SC scores, conclusions on some important issues that have to be clarified to ensure the appropriateness of the SC measure could not be properly drawn because of a lack of backup data. In this paper, to clarify the remaining issues, additional activities to verify the appropriateness of the SC measure are performed using averaged step performance time data obtained from emergency training records. The total number of available records is 36, covering two training scenarios, the steam generator tube rupture and the loss of all feedwater, with 18 records each. From these emergency training records, averaged step performance time data for 30 steps are retrieved. As a result, the SC measure shows statistically meaningful
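    The verification logic above boils down to correlating estimated SC scores with averaged step performance times. A minimal sketch, with made-up numbers standing in for the training-record data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical SC scores and averaged step performance times (seconds);
# these are NOT the training-record data from the paper.
sc_scores = [2.1, 3.4, 3.9, 5.0, 5.8, 7.2]
mean_times = [35.0, 48.0, 55.0, 70.0, 76.0, 95.0]

r = pearson(sc_scores, mean_times)
print(round(r, 3))  # strong positive correlation for this made-up data
```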

  14. Focal cryotherapy: step by step technique description

    Directory of Open Access Journals (Sweden)

    Cristina Redondo

    Full Text Available ABSTRACT Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors, under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated once. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment.

  15. Step-by-step guide to building an inexpensive 3D printed motorized positioning stage for automated high-content screening microscopy.

    Science.gov (United States)

    Schneidereit, Dominik; Kraus, Larissa; Meier, Jochen C; Friedrich, Oliver; Gilbert, Daniel F

    2017-06-15

    High-content screening microscopy relies on automation infrastructure that is typically proprietary, non-customizable, costly and requires a high level of skill to use and maintain. The increasing availability of rapid prototyping technology makes it possible to quickly engineer alternatives to conventional automation infrastructure that are low-cost and user-friendly. Here, we describe a 3D-printed, inexpensive, open-source and scalable motorized positioning stage for automated high-content screening microscopy and provide detailed step-by-step instructions for rebuilding the device, including a comprehensive parts list, 3D design files in STEP (Standard for the Exchange of Product model data) and STL (Standard Tessellation Language) format, electronic circuits and wiring diagrams, as well as software code. System assembly, including 3D printing, requires approx. 30 h. The fully assembled device is light-weight (1.1 kg), small (33 × 20 × 8 cm) and extremely low-cost (approx. EUR 250). We describe positioning characteristics of the stage, including spatial resolution, accuracy and repeatability; compare imaging data generated with our device to data obtained using a commercially available microplate reader; demonstrate its suitability to high-content microscopy in 96-well high-throughput screening format; and validate its applicability to automated functional Cl−- and Ca2+-imaging with recombinant HEK293 cells as a model system. A time-lapse video of the stage during operation and as part of a custom assembled screening robot can be found at https://vimeo.com/158813199. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
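    A stage like this is typically driven by simple positioning commands. The sketch below generates a serpentine scan over a 96-well plate (9 mm pitch is the standard well spacing) and formats hypothetical G-code moves; the command interface is an assumption for illustration, not taken from the published firmware.

```python
WELL_PITCH_MM = 9.0  # standard 96-well plate pitch

def well_positions(rows=8, cols=12, pitch=WELL_PITCH_MM):
    """Yield (well_name, x_mm, y_mm) in serpentine (boustrophedon) order,
    so the stage never makes a long return move between rows."""
    for r in range(rows):
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            name = "%s%d" % (chr(ord("A") + r), c + 1)
            yield name, c * pitch, r * pitch

def gcode(positions, feed_mm_min=600):
    """Format each target as a hypothetical G1 linear move."""
    return ["G1 X%.1f Y%.1f F%d" % (x, y, feed_mm_min) for _, x, y in positions]

moves = list(well_positions())
print(moves[0])    # first well of row A
print(moves[12])   # first well of row B, scanned right-to-left
print(gcode(moves)[0])
```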

  16. Modeling of the steam hydrolysis in a two-step process for hydrogen production by solar concentrated energy

    Science.gov (United States)

    Valle-Hernández, Julio; Romero-Paredes, Hernando; Pacheco-Reyes, Alejandro

    2017-06-01

    In this paper, the simulation of the steam hydrolysis step for hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 to a lower-valence cerium oxide at high temperature, where concentrated solar energy is used as the source of heat, and the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. The modeling of the endothermic reduction step was presented at SolarPACES 2015. This work presents the modeling of the exothermic step: the hydrolysis of the cerium(III) oxide to form H2 and regenerate the initial cerium oxide, carried out at lower temperature inside the solar reactor. For this model, three sections of the pipe where the reaction occurs were considered: the steam inlet, the porous medium, and the outlet for the hydrogen produced. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and yield the fluid dynamics and heat transfer within the pipe. This work is part of the project "Solar Fuels and Industrial Processes" of the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).

  17. Systematic identification and robust control design for uncertain time delay processes

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    A systematic procedure is proposed to handle the standard process control problem. The considered standard problem involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order … to reject unmeasured step disturbances and unavoidable model errors. This controller is illustrated to perform well for both set-point tracking and disturbance rejection on a SISO process example of a furnace, which has a time delay significantly longer than the dominating time constant.
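    The identification half of such a procedure, fitting an ARX model to input-output data by least squares, can be sketched for a first-order case. Noise-free data and a known unit delay are simplifying assumptions for illustration:

```python
# Fit y[k] = a*y[k-1] + b*u[k-1] by least squares (2x2 normal equations).
def fit_arx1(u, y):
    s_yy = s_yu = s_uu = s_yY = s_uY = 0.0
    for k in range(1, len(y)):
        phi_y, phi_u, Y = y[k - 1], u[k - 1], y[k]
        s_yy += phi_y * phi_y
        s_yu += phi_y * phi_u
        s_uu += phi_u * phi_u
        s_yY += phi_y * Y
        s_uY += phi_u * Y
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * s_yY - s_yu * s_uY) / det
    b = (s_yy * s_uY - s_yu * s_yY) / det
    return a, b

# Simulate the true system y[k] = 0.8*y[k-1] + 0.5*u[k-1] under a step input.
u = [0.0] * 5 + [1.0] * 45
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])

a_hat, b_hat = fit_arx1(u, y)
print(round(a_hat, 3), round(b_hat, 3))  # recovers 0.8 and 0.5 exactly here
```

    With measurement noise, the same normal equations give biased estimates, which is one reason the procedure augments the ARX model with a noise model.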

  18. Models of human adamantinomatous craniopharyngioma tissue: Steps toward an effective adjuvant treatment.

    Science.gov (United States)

    Hölsken, Annett; Buslei, Rolf

    2017-05-01

    Even though ACP is a benign tumor, treatment is challenging because of the tumor's eloquent location. Today, with the exception of surgical intervention and irradiation, treatment options are limited. However, ongoing molecular research in this field provides insights into the pathways involved in ACP pathogenesis and reveals a plethora of druggable targets. As a next step, appropriate models are essential to identify the most suitable and effective substances for clinical practice. Primary cell cultures at low passage provide a proper and rapid tool for initial drug potency testing. The patient-derived xenograft (PDX) model accommodates ACP complexity in that it preserves the tumor architecture and shows a histological appearance similar to human tumors, and it therefore provides the most appropriate means for analyzing pharmacological efficacy. Nevertheless, further research is needed to understand the biological background of ACP pathogenesis in more detail, which will allow identification of the best targets in the hierarchy of signaling cascades. ACP models are also important for the continuous testing of new targeting drugs, to establish precision medicine. © 2017 International Society of Neuropathology.

  19. Improving stability of stabilized and multiscale formulations in flow simulations at small time steps

    KAUST Repository

    Hsu, Ming-Chen

    2010-02-01

    The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.

  20. Stepping movement analysis of control rod drive mechanism

    International Nuclear Information System (INIS)

    Xu Yantao; Zu Hongbiao

    2013-01-01

    Background: The control rod drive mechanism (CRDM) is one of the important safety-related pieces of equipment in nuclear power plants. Purpose: The operating parameters of the stepping movement, including lifting loads, step distance and step velocity, are all critical design targets. Methods: FEA and numerical simulation are used to analyze the stepping movement separately. Results: The motion equations of the movable magnet in the stepping movement are established through load analysis, in which gravitation, magnetic force, fluid resistance and spring force are all considered. The operating parameters of the stepping movement are given. Conclusions: The results, including time-history curves of force, speed, etc., can be used directly in the design of the CRDM. (authors)
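    A load analysis of this kind amounts to an equation of motion of the form m·x″ = F_mag − m·g − c·x′ − k·x, which can be integrated step by step to obtain the time-history curves. All parameter values below are illustrative assumptions, not CRDM design data:

```python
# Illustrative parameters (assumptions, not CRDM design data).
m = 5.0        # moving mass (kg)
g = 9.81       # gravitational acceleration (m/s^2)
c = 40.0       # fluid resistance coefficient (N*s/m)
k = 2000.0     # spring stiffness (N/m)
F_mag = 500.0  # magnetic driving force during a lift (N)
dt = 1e-4      # integration time step (s)

x, v, t = 0.0, 0.0, 0.0
while t < 0.5:
    a = (F_mag - m * g - c * v - k * x) / m  # load balance on the magnet
    v += a * dt                              # semi-implicit Euler update
    x += v * dt
    t += dt

x_eq = (F_mag - m * g) / k  # static equilibrium the motion settles toward
print(round(x, 4), round(x_eq, 4))
```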

  1. One-step fabrication of multifunctional micromotors

    Science.gov (United States)

    Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y.

    2015-08-01

    Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications. Electronic supplementary information (ESI) available: Videos S1-S4 and Fig. S1-S3. See DOI: 10.1039/c5nr03574k

  2. Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing

    Science.gov (United States)

    Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun

    2016-01-01

    Surfaces with controlled atomic step structures as substrates are highly relevant to the desirable performance of materials grown on them, such as light-emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to step formation in the manufacturing process. In the present work, this step formation mechanism on the sapphire c (0001) surface has been investigated using both experiments and simulations. The step evolution at different stages of the polishing process was investigated with atomic force microscopy (AFM) and high-resolution transmission electron microscopy (HRTEM). Idealized step models were constructed theoretically on the basis of the experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on the surface composition and the miscut direction (step edge direction). A comparison between the experimental results and idealized step models of different surface compositions has been made. The structure of the polished surface was found to be in accordance with certain surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267

  3. Avoid the tsunami of the Dirac sea in the imaginary time step method

    International Nuclear Information System (INIS)

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac seas have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation, after avoiding the "tsunami" of the Dirac sea, i.e. the diving of single-particle levels into the Dirac sea that occurs when the ITS method is applied directly to the Dirac equation. It is found that, through the transformation from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from positive to negative infinity, can be obtained separately by ITS evolution in either the Fermi sea or the Dirac sea. Results identical to those of the conventional shooting method have been obtained via ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels doubts about the ITS method in relativistic systems. (author)
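    The ITS idea itself is easy to demonstrate on a non-relativistic toy problem: evolving a trial state in imaginary time damps excited components, relaxing it to the ground state. Below, a 1D harmonic oscillator (ħ = m = ω = 1, exact E₀ = 0.5) stands in for the nuclear mean field; the grid and step sizes are assumptions:

```python
import math

N, L = 200, 10.0            # grid points, box [-L/2, L/2]
dx = L / N
dtau = 1e-3                 # imaginary time step (kept below ~dx^2 for stability)
x = [(i - N / 2) * dx for i in range(N)]
V = [0.5 * xi * xi for xi in x]      # harmonic oscillator potential
psi = [1.0] * N                      # crude trial state

def apply_H(psi):
    """Three-point finite-difference Hamiltonian with Dirichlet walls."""
    out = [0.0] * N
    for i in range(1, N - 1):
        lap = (psi[i - 1] - 2.0 * psi[i] + psi[i + 1]) / dx**2
        out[i] = -0.5 * lap + V[i] * psi[i]
    return out

for _ in range(5000):
    Hpsi = apply_H(psi)
    psi = [p - dtau * hp for p, hp in zip(psi, Hpsi)]  # psi -> (1 - dtau*H) psi
    psi[0] = psi[-1] = 0.0                             # enforce box boundaries
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]                      # renormalize each step

Hpsi = apply_H(psi)
E0 = sum(p * hp for p, hp in zip(psi, Hpsi)) * dx      # <psi|H|psi>
print(round(E0, 3))  # relaxes toward the exact ground-state energy 0.5
```

    The paper's point is that this relaxation fails for the Dirac equation itself, whose spectrum is unbounded from below, and succeeds after the transformation to the Schroedinger-like equation.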

  4. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material

  5. Webinar Presentation: Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time

    Science.gov (United States)

    This presentation, Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time, was given at the NIEHS/EPA Children's Centers 2016 Webinar Series: Exposome.

  6. Non-equilibrium coherent vortex states and subharmonic giant Shapiro steps in Josephson junction arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Jose, J.V.; Northeastern Univ., Boston, MA

    1994-01-01

    This is a review of recent work on the dynamic response of Josephson junction arrays driven by dc and ac currents. The arrays are modeled by the resistively shunted Josephson junction model, appropriate for proximity effect junctions, including self-induced magnetic fields as well as disorder. The relevance of the self-induced fields is measured by a parameter κ = λ_L/a, with λ_L the London penetration depth of the arrays and a the lattice spacing. The transition from Type II (κ > 1) to Type I (κ < 1) behavior is studied in detail. The authors compare the results for models with self, self + nearest-neighbor, and full inductance matrices. In the κ = ∞ limit, they find that when the initial state has at least one vortex-antivortex pair, after a characteristic transient time these vortices unbind and radiate other vortices. These radiated vortices settle into a parity-broken, time-periodic, axisymmetric coherent vortex state (ACVS), characterized by alternate rows of positive and negative vortices lying along a tilted axis. The ACVS produces subharmonic steps in the current-voltage (IV) characteristics, typical of giant Shapiro steps. For finite κ they find that the IVs show subharmonic giant Shapiro steps, even at zero external magnetic field. They find that these subharmonic steps are produced by a whole family of coherent oscillating vortex patterns, whose structure changes as a function of κ. In general, they find that these patterns are due to a breakdown of translational invariance produced, for example, by disorder or antisymmetric edge fields. The zero-field results are in good qualitative agreement with experiments on Nb-Au-Nb arrays
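    The single-junction building block of this array model already exhibits Shapiro steps. In reduced units the overdamped RSJ phase obeys dφ/dt = i_dc + i_ac·sin(Ωt) − sin φ, and the dc voltage ⟨dφ/dt⟩ locks to integer multiples of Ω. The sketch below is a toy single-junction version with illustrative parameters chosen to sit near the n = 1 step, not the paper's array code:

```python
import math

i_dc, i_ac, W = 2.24, 2.0, 2.0   # dc bias, ac amplitude, drive frequency
dt = 0.005
n_steps, n_skip = 100000, 20000  # integrate 500 time units, skip the transient

phi, phi_start = 0.0, 0.0
for k in range(n_steps):
    t = k * dt
    if k == n_skip:
        phi_start = phi          # start of the measuring window
    # Overdamped RSJ equation in reduced units, forward Euler step.
    phi += dt * (i_dc + i_ac * math.sin(W * t) - math.sin(phi))

v_avg = (phi - phi_start) / ((n_steps - n_skip) * dt)
print(round(v_avg, 2))  # dc voltage close to 1 * W = 2.0 (the n = 1 step)
```

    Sweeping i_dc and plotting v_avg traces out the IV characteristic with its voltage plateaus; the array physics reviewed above adds the vortex degrees of freedom on top of this mechanism.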

  7. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.

  8. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results' accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence without major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
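    The discretization issue can be made concrete with a single depleting nuclide, dN/dt = −σφ(t)N, across one coupled-code time step during which the flux varies. The sketch compares the conventional frozen-rate scheme with sub-stepping that only re-evaluates the rate (no extra transport solutions); all numbers are illustrative, not BGCore data:

```python
import math

sigma, phi0, alpha, T, N0 = 2.0, 1.0, 0.5, 1.0, 1.0

def flux(t):
    """Assumed in-step flux variation (linear ramp)."""
    return phi0 * (1.0 + alpha * t)

# Exact depletion over the step: N(T) = N0 * exp(-sigma * integral of flux).
exact = N0 * math.exp(-sigma * phi0 * (T + alpha * T * T / 2.0))

# Conventional scheme: reaction rate frozen at its beginning-of-step value.
one_step = N0 * math.exp(-sigma * flux(0.0) * T)

# Sub-stepping: same transport data, rate re-evaluated on each sub-interval.
def substepped(n_sub):
    N, dt = N0, T / n_sub
    for j in range(n_sub):
        N *= math.exp(-sigma * flux(j * dt) * dt)
    return N

err_one = abs(one_step - exact)
err_sub = abs(substepped(10) - exact)
print(err_one > err_sub)  # sub-stepping reduces the discretization error
```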

  9. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
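    The cdf/inverse-cdf construction can be sketched end to end: generate a series with AR(1) internal dynamics and an exponential marginal, rank-transform it back to the Gaussian scale, and fit the autoregression there. The exponential marginal and AR(1) dynamics below are toy assumptions, not the paper's nonparametric Bayesian prior:

```python
import math
import random
from statistics import NormalDist

random.seed(7)
nd = NormalDist()

# Latent Gaussian AR(1) with coefficient 0.7; exponential(1) marginal on top.
a = 0.7
z = [random.gauss(0, 1)]
for _ in range(4999):
    z.append(a * z[-1] + math.sqrt(1 - a * a) * random.gauss(0, 1))
x = [-math.log(1.0 - nd.cdf(zi)) for zi in z]   # inverse exponential cdf

# Analyst's side: rank-transform the observed series back to the Gaussian scale.
n = len(x)
order = sorted(range(n), key=lambda i: x[i])
ranks = [0] * n
for r, i in enumerate(order):
    ranks[i] = r + 1
z_hat = [nd.inv_cdf(rk / (n + 1)) for rk in ranks]

# Estimate the AR(1) coefficient by lag-1 regression on the transformed scale.
num = sum(z_hat[k] * z_hat[k - 1] for k in range(1, n))
den = sum(zi * zi for zi in z_hat[:-1])
a_hat = num / den
print(round(a_hat, 2))  # recovers the latent AR coefficient, about 0.7
```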

  10. Willingness-to-pay for steelhead trout fishing: Implications of two-step consumer decisions with short-run endowments

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2010-09-01

    Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.

  11. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    Science.gov (United States)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and react chemically with other radiolytic species and neighboring biological molecules, leading to various types of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles must be evaluated at each time step [2]. This kind of problem is well suited to General Purpose Graphics Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to improve the calculation time. This code should be important for linking radiation track structure simulations and DNA damage models.
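    The step-by-step structure being described alternates diffusion with pairwise reaction tests. The toy sketch below samples each displacement from the free-diffusion Green function (a Gaussian) and uses a fixed reaction radius in place of the exact diffusion-influenced reaction probabilities; the species count, coefficients and radii are assumptions for illustration:

```python
import math
import random

random.seed(1)
D = 2.3e-9        # diffusion coefficient (m^2/s), roughly a hydrated electron
dt = 1e-12        # 1 ps time step
R_rxn = 0.5e-9    # assumed reaction radius (m), a crude stand-in

# Initial positions clustered as in a radiation-track "spur" (~1 nm spread).
N = 20
pos = [[random.gauss(0, 1e-9) for _ in range(3)] for _ in range(N)]

def step(pos):
    """One Brownian step: displacements sampled from the free Green function,
    followed by the O(N^2) pairwise reaction test that motivates GPGPU use."""
    s = math.sqrt(2 * D * dt)
    new = [[c + random.gauss(0, s) for c in p] for p in pos]
    alive = [True] * len(new)
    for i in range(len(new)):
        for j in range(i + 1, len(new)):
            if alive[i] and alive[j] and math.dist(new[i], new[j]) < R_rxn:
                alive[i] = alive[j] = False   # the pair reacts; remove both
    return [p for p, a in zip(new, alive) if a]

for _ in range(1000):   # 1 ns of simulated chemistry
    pos = step(pos)
survivors = len(pos)
print(survivors)        # species remaining after recombination
```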

  12. Comparative analysis of single-step and two-step biodiesel production using supercritical methanol on laboratory-scale

    International Nuclear Information System (INIS)

    Micic, Radoslav D.; Tomić, Milan D.; Kiss, Ferenc E.; Martinovic, Ferenc L.; Simikić, Mirko Ð.; Molnar, Tibor T.

    2016-01-01

    Highlights: • Single-step supercritical transesterification compared to the two-step process. • Two-step process: oil hydrolysis and subsequent supercritical methyl esterification. • Experiments were conducted in a laboratory-scale batch reactor. • Higher biodiesel yields in the two-step process at milder reaction conditions. • The two-step process has potential to be cost-competitive with the single-step process. - Abstract: Single-step supercritical transesterification and a two-step biodiesel production process consisting of oil hydrolysis and subsequent supercritical methyl esterification were studied and compared. For this purpose, comparative experiments were conducted in a laboratory-scale batch reactor and the optimal reaction conditions (temperature, pressure, molar ratio and time) were determined. Results indicate that, in comparison to single-step transesterification, methyl esterification (the second step of the two-step process) produces higher biodiesel yields (95 wt% vs. 91 wt%) at lower temperatures (270 °C vs. 350 °C), pressures (8 MPa vs. 12 MPa) and methanol to oil molar ratios (1:20 vs. 1:42). This can be explained by the fact that the reaction system consisting of free fatty acid (FFA) and methanol achieves supercritical conditions at milder reaction conditions. Furthermore, the dissolved FFA increases the acidity of the supercritical methanol and acts as an acid catalyst that increases the reaction rate. There is a direct correlation between the FFA content of the product obtained in hydrolysis and the biodiesel yield in methyl esterification. Therefore, the reaction parameters of hydrolysis were optimized to yield the highest FFA content, at 12 MPa, 250 °C and a 1:20 oil to water molar ratio. A direct comparison of material and energy costs suggests that the two-step process has the potential to be cost-competitive with the process based on single-step supercritical transesterification. Higher biodiesel yields, similar or lower energy

  13. Dynamic RSA for the evaluation of inducible micromotion of Oxford UKA during step-up and step-down motion.

    Science.gov (United States)

    Horsager, Kristian; Kaptein, Bart L; Rømer, Lone; Jørgensen, Peter B; Stilling, Maiken

    2017-06-01

    Background and purpose - Implant inducible micromotions have been suggested to reflect the quality of the fixation interface. We investigated the usability of dynamic RSA for evaluation of inducible micromotions of the Oxford Unicompartmental Knee Arthroplasty (UKA) tibial component, and evaluated factors that have been suggested to compromise the fixation, such as fixation method, component alignment, and radiolucent lines (RLLs). Patients and methods - 15 patients (12 men) with a mean age of 69 (55-86) years, with an Oxford UKA (7 cemented), were studied after a mean time in situ of 4.4 (3.6-5.1) years. 4 had tibial RLLs. Each patient was recorded with dynamic RSA (10 frames/second) during a step-up/step-down motion. Inducible micromotions were calculated for the tibial component with respect to the tibia bone. Postoperative component alignment was measured with model-based RSA and RLLs were measured on screened radiographs. Results - All tibial components showed inducible micromotions as a function of the step-cycle motion with a mean subsidence of up to -0.06 mm (95% CI: -0.10 to -0.03). Tibial component inducible micromotions were similar for cemented fixation and cementless fixation. Patients with tibial RLLs had 0.5° (95% CI: 0.18-0.81) greater inducible medio-lateral tilt of the tibial component. There was a correlation between postoperative posterior slope of the tibial plateau and inducible anterior-posterior tilt. Interpretation - All patients had inducible micromotions of the tibial component during step-cycle motion. RLLs and a high posterior slope increased the magnitude of inducible micromotions. This suggests that dynamic RSA is a valuable clinical tool for the evaluation of functional implant fixation.

  14. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    International Nuclear Information System (INIS)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik; Suzuki, Mitsutoshi

    2014-01-01

    The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor indicating nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on nuclear fuel breeding performance, expressed as the breeding ratio (BR). The burnup step is specified on a day basis and is varied from 10 days up to 800 days, and the cycle operation from 1 up to 8 reactor operation cycles. In addition, calculation efficiency was investigated as a function of the number of computer processors used for the analysis (time efficiency of the calculation). The reactor design analysis used a large fast breeder reactor as the reference case and was performed with the established reactor design code JOINT-FR. The results show that the calculated criticality becomes higher, and the breeding ratio lower, for smaller burnup steps (in days). Some nuclides improve the calculated criticality at smaller burnup steps owing to their individual half-lives. The calculation time for different burnup steps correlates with the additional time required for more detailed step calculations, although it is not directly proportional to the number of divisions of the burnup time step
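The step-size sensitivity described above can be illustrated with a toy single-nuclide depletion calculation. This is only an illustrative sketch, not the JOINT-FR methodology, and the decay constant and time spans are hypothetical:

```python
import math

def deplete(n0, lam, t_total, dt):
    """Explicit-Euler depletion of a single nuclide, dN/dt = -lam * N,
    integrated with a fixed burnup step dt (a stand-in for the day-step)."""
    n, t = n0, 0.0
    while t < t_total - 1e-9:
        n -= lam * n * dt   # one burnup step
        t += dt
    return n

lam, t_total = 1.0e-3, 800.0          # hypothetical decay constant (1/day), 800 days
exact = math.exp(-lam * t_total)      # analytic solution
coarse = deplete(1.0, lam, t_total, dt=800.0)  # a single 800-day burnup step
fine = deplete(1.0, lam, t_total, dt=10.0)     # 10-day burnup steps
```

The 10-day step tracks the analytic result far more closely than the single 800-day step, mirroring how the evaluated criticality and BR in the study depend on the chosen burnup step.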

  15. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating circumstances. First, a practical PV system model is studied, including the series and shunt resistances that are neglected in some research. In the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, using input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and by detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
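A rough sketch of the adaptive-step idea, not the thesis's actual algorithm: the loop below perturbs the duty ratio of a boost converter and scales the perturbation step with the observed relative power change, so the step grows under sharp power changes and shrinks near the maximum power point. The toy power curve and all constants are hypothetical:

```python
def pv_power(d):
    """Toy unimodal power-vs-duty-ratio curve standing in for the PV + boost stage,
    with its maximum (100 W) at d = 0.5. Purely illustrative."""
    return 100.0 - 400.0 * (d - 0.5) ** 2

def run_adaptive_po(d=0.1, n_steps=300, k=0.2, step_min=0.005):
    """Adaptive-step perturb-and-observe: the duty-ratio step scales with
    |dP|/P (fast tracking when power changes sharply) and floors at
    step_min near the MPP (small steady-state oscillation)."""
    direction, step = +1, step_min
    p_prev = pv_power(d)
    for _ in range(n_steps):
        d = min(max(d + direction * step, 0.0), 0.95)  # perturb duty ratio
        p_now = pv_power(d)                            # observe the power
        if p_now < p_prev:
            direction = -direction                     # wrong way: reverse
        step = max(k * abs(p_now - p_prev) / max(p_now, 1e-9), step_min)
        p_prev = p_now
    return d

d_final = run_adaptive_po()   # settles close to the d = 0.5 maximum
```

Near the MPP the power difference between perturbations vanishes, so the step collapses to `step_min`; this is the trade-off the adaptive scheme addresses relative to fixed-step P&O.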

  16. Steps and dislocations in cubic lyotropic crystals

    International Nuclear Information System (INIS)

    Leroy, S; Pieranski, P

    2006-01-01

    It has been shown recently that lyotropic systems are convenient for studies of faceting, growth or anisotropic surface melting of crystals. All these phenomena imply the active contribution of surface steps and bulk dislocations. We show here that steps can be observed in situ and in real time by means of a new method combining hygroscopy with phase contrast. First results raise interesting issues about the consequences of bicontinuous topology on the structure and dynamical behaviour of steps and dislocations

  17. Two-step rapid sulfur capture. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process, called the Two-Step Rapid Sulfur Capture process, for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high-temperature auxiliary burner for short times, followed by sorbent quenching in a lower-temperature, sulfur-containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation was thought to be an activation temperature > 2,200 K for activation times in the range of 10-30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and to complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.

  18. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, given that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.

  19. Spectrum of Slip Processes on the Subduction Interface in a Continuum Framework Resolved by Rate- and State-Dependent Friction and Adaptive Time Stepping

    Science.gov (United States)

    Herrendoerfer, R.; van Dinther, Y.; Gerya, T.

    2015-12-01

    To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake-cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant for estimating the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we have implemented rate- and state-dependent friction (RSF) and adaptive time-stepping in our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation, so we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby validating our implementation to first order. By incorporating adaptive time-stepping based on a
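A common adaptive time-stepping rule in rate-and-state earthquake-cycle codes limits the slip accumulated in one step to a fraction of the characteristic slip distance. The sketch below illustrates that generic rule only; it is not the authors' implementation, and all constants are hypothetical:

```python
def adaptive_dt(v_max, L=0.01, xi=0.2, dt_max=1.0e6):
    """Choose the next time step (s) so that no point slips more than xi * L,
    where L is the characteristic slip distance (m) of the RSF law and
    v_max is the current maximum slip rate (m/s) in the domain."""
    return min(dt_max, xi * L / v_max)

# Interseismic creep permits the maximum step, while coseismic slip rates
# force the solver down to millisecond-scale resolution:
dt_slow = adaptive_dt(1.0e-9)   # ~3 cm/yr: capped at dt_max
dt_fast = adaptive_dt(1.0)      # ~1 m/s:   0.002 s
```

Because the step contracts and expands with the instantaneous slip rate, a single run can resolve both slow aseismic transients and fast earthquake-like ruptures, which is the point of the adaptive scheme in the abstract.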

  20. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (step 2)

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Endo, Yoshinobu

    2005-02-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations are being conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 2, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) The understanding of groundwater flow has been enhanced and the hydrogeological model revised; 2) The importance of faults as major groundwater flow pathways has been demonstrated; 3) The importance of an iterative approach as investigations progress has been demonstrated; 4) The geological and hydraulic characteristics of faults trending NNW, NW and NE were shown to be especially significant; 5) The hydraulic properties of the Lower Sparsely Fractured Domain (LSFD) significantly influence the groundwater flow. The main items specified for further investigation are summarized as follows: 1) Geological and hydraulic characteristics of the NNW-, NW- and NE-trending faults; 2) Hydraulic properties of the LSFD; 3) More accurate upper and lateral boundary conditions for the site-scale model. (author)