WorldWideScience

Sample records for model time step

  1. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
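
    A minimal sketch of the filter described above (not the Met Office code): leapfrog integration of the test oscillation equation dx/dt = i*omega*x, with the filter applied at every step. The filter parameter nu, the RAW parameter alpha and the step count are illustrative; alpha = 1 recovers the classical RA filter, while alpha = 0.5 conserves the three-time-level mean exactly (values slightly above 0.5 are commonly quoted for robustness).

      import numpy as np

      # Leapfrog with the Robert-Asselin (RA) / Robert-Asselin-Williams (RAW)
      # filter on dx/dt = i*omega*x. With alpha = 1 this is plain RA.
      def leapfrog_raw(omega=1.0, dt=0.2, nsteps=500, nu=0.2, alpha=0.53):
          f = lambda x: 1j * omega * x
          x_old = 1.0 + 0j                      # level n-1
          x_now = x_old + dt * f(x_old)         # Euler start for level n
          for _ in range(nsteps):
              x_new = x_old + 2.0 * dt * f(x_now)           # leapfrog step
              d = 0.5 * nu * (x_old - 2.0 * x_now + x_new)  # filter displacement
              x_now += alpha * d                # the RA part of the filter
              x_new += (alpha - 1.0) * d        # the single extra RAW line
              x_old, x_now = x_now, x_new
          return x_now

      print(abs(leapfrog_raw(alpha=1.0)))   # RA: amplitude decays well below 1
      print(abs(leapfrog_raw(alpha=0.53)))  # RAW: amplitude stays close to 1

    The update of x_new is the "single line of code" change relative to an existing RA implementation; it is what restores conservation of the three-time-level mean.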

  2. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    In this paper, we present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho (SABR) model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper, Leitao et al. [Appl. Math. …].

  3. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    Science.gov (United States)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
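
    A sketch of the bookkeeping behind the reported speed-up, under stated assumptions (uniform wave speed, power-of-two step levels, counting cell updates only); the paper's GPU implementation and overhead handling are far more involved.

      import numpy as np

      # Assign each cell of a non-uniform mesh a power-of-two fraction of the
      # largest stable step and compare cell updates per macro step against
      # Global Time Stepping (GTS), which advances all cells at the smallest
      # stable step. All numbers are illustrative.
      def lts_update_reduction(dx, wave_speed=1.0, cfl=0.9):
          dt_cell = cfl * dx / wave_speed          # stable step per cell (CFL)
          dt_max = dt_cell.max()
          # level k cells advance with dt_max / 2**k; pick the coarsest stable level
          levels = np.ceil(np.log2(dt_max / dt_cell)).astype(int)
          updates_lts = np.sum(2 ** levels)        # updates per macro step
          substeps_gts = int(np.ceil(dt_max / dt_cell.min()))
          updates_gts = dx.size * substeps_gts     # GTS: every cell, every substep
          return updates_gts / updates_lts

      dx = np.full(1000, 10.0)    # coarse 10 m cells...
      dx[480:520] = 0.5           # ...with a small locally refined patch
      print(f"cell-update reduction with LTS: {lts_update_reduction(dx):.1f}x")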

  4. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

    This work lays out a general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and of the issues arising when co-simulating sub-models. Possible solutions to the stated problems are investigated to a certain depth. A particular focus is placed on the issue of time stepping. It is shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.

  5. Stepping Stones through Time

    Directory of Open Access Journals (Sweden)

    Emily Lyle

    2012-03-01

    Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.

  6. The G2 erosion model: An algorithm for month-time step assessments.

    Science.gov (United States)

    Karydas, Christos G; Panagos, Panos

    2018-02-01

    A detailed description of the G2 erosion model is presented, in order to support potential users. G2 is a complete, quantitative algorithm for mapping soil loss and sediment yield rates on month-time intervals. G2 has been designed to run in a GIS environment, taking input from geodatabases made available by European and other international institutions. G2 adopts fundamental equations from the Revised Universal Soil Loss Equation (RUSLE) and the Erosion Potential Method (EPM), especially for rainfall erosivity, soil erodibility, and sediment delivery ratio. However, it has developed its own equations and matrices for the vegetation cover and management factor and for the effect of landscape alterations on erosion. Provision of month-time step assessments is expected to improve understanding of erosion processes, especially in relation to land uses and climate change. In parallel, G2 has full potential to support decision-making with standardised maps on a regular basis. Geospatial layers of rainfall erosivity, soil erodibility, and terrain influence, recently developed by the Joint Research Centre (JRC) on a European or global scale, will further facilitate applications of G2. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
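
    For reference, G2 inherits the multiplicative structure of RUSLE, which it evaluates on month-time steps (standard RUSLE notation, not G2's own symbols):

      % soil loss A from rainfall erosivity R, soil erodibility K,
      % slope length/steepness LS, cover-management C and support practice P
      \[
        A \;=\; R \cdot K \cdot LS \cdot C \cdot P
      \]

    In G2, the month-to-month variation enters mainly through the erosivity and vegetation terms, for which the model supplies its own equations and matrices as noted above.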

  7. Short time step continuous rainfall modeling and simulation of extreme events

    Science.gov (United States)

    Callau Poduje, A. C.; Haberlandt, U.

    2017-09-01

    The design, planning, operation and overall assessment of urban drainage systems require long and continuous rain series at a high temporal resolution. Unfortunately, the availability of such data is usually short. A precipitation model can be used to tackle this shortcoming; it is therefore the aim of this study to present a stochastic point precipitation model that reproduces average rainfall event properties along with extreme values. For this purpose a model is proposed to generate long synthetic series of rainfall at a temporal resolution of 5 min. It is based on an alternating renewal framework, and events are characterized by variables describing durations, amounts and peaks. A group of 24 stations located in the north of Germany is used to set up and test the model. The adequate modeling of the joint behaviour of rainfall amount and duration is found to be essential for reproducing the observed properties, especially for the extreme events. Copulas are advantageous tools for modeling these variables jointly; however, caution must be taken in the selection of the proper copula. The inclusion of seasonality and small events is tested as well and found to be useful. The model is directly validated by generating long synthetic time series and comparing them with observed ones. An indirect validation is also performed, based on a fictional urban hydrological system. The proposed model is capable of reproducing seasonal behaviour and the main characteristics of the rainfall events, including extremes, along with urban flooding and overflow behaviour. Overall the performance of the model is acceptable compared to the design practice. The proposed model is simple to interpret, fast to implement and to transfer to other regions, whilst showing acceptable results.
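
    A minimal sketch of the copula mechanism referred to above, assuming a Gaussian copula and illustrative exponential/gamma marginals; nothing here is fitted to the 24 German stations, and the study's point about selecting the proper copula family is not addressed by this toy.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rho = 0.7                                  # illustrative dependence
      cov = [[1.0, rho], [rho, 1.0]]

      z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
      u = stats.norm.cdf(z)                      # correlated uniform margins

      duration_min = stats.expon(scale=180.0).ppf(u[:, 0])    # event duration
      amount_mm = stats.gamma(a=0.8, scale=6.0).ppf(u[:, 1])  # event depth

      # the rank dependence survives the marginal transforms:
      print(stats.spearmanr(duration_min, amount_mm)[0])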

  8. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process in which differential photochemical dissociation could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  9. Grief: Difficult Times, Simple Steps.

    Science.gov (United States)

    Waszak, Emily Lane

    This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…

  10. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

    Measured data from the years 2006 to 2008 were used for calibrating fourteen models for estimating solar radiation on seasonal and annual time steps, and the measured data of the years 2009 and 2010 were used for evaluating the results. The equations used in this study were divided into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; (3) equations based on sunshine hours and air temperature together. A statistical comparison was then performed to select the best equation for estimating solar radiation on seasonal and annual time steps. For this purpose, in the validation stage a combination of statistical measures and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models on the mentioned time steps. Results and Discussion: The mean values of the MSD of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring through winter, respectively, and 15.40 on the annual time step. The results therefore showed that the equations achieved high accuracy for autumn but low accuracy for the other seasons, so the equations on the annual time step were more appropriate than those on seasonal time steps. Also, the mean values of the MSD of the equations based only on sunshine hours, only on air temperature, and on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. The results thus indicated that the models based only on air temperature performed worst for estimating solar radiation in the Shiraz region, and that using sunshine hours for estimating solar radiation is necessary. Conclusions: In this study, for estimating solar radiation on seasonal and annual time steps in the Shiraz region…
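
    Two classic equation forms behind the model groups named in the record, sketched with textbook default coefficients; the study's point is precisely that such coefficients must be calibrated per season and station, so the values below are placeholders only.

      import numpy as np

      # ra: extraterrestrial radiation, n/N: relative sunshine duration,
      # tmax/tmin: daily temperature extremes. Coefficients are generic defaults.
      def rs_sunshine(ra, n, N, a=0.25, b=0.50):
          """Group 1 (sunshine hours): Angstrom-Prescott, Rs = Ra*(a + b*n/N)."""
          return ra * (a + b * n / N)

      def rs_temperature(ra, tmax, tmin, kr=0.16):
          """Group 2 (air temperature): Hargreaves-type, Rs = kr*sqrt(Tmax-Tmin)*Ra."""
          return kr * np.sqrt(tmax - tmin) * ra

      ra = 30.0  # MJ m-2 day-1, illustrative
      print(rs_sunshine(ra, n=8.0, N=11.0))            # ~18.4
      print(rs_temperature(ra, tmax=34.0, tmin=19.0))  # ~18.6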

  11. 3D elastic wave modeling using modified high-order time stepping schemes with improved stability conditions

    KAUST Repository

    Chu, Chunlei

    2009-01-01

    We present two Lax-Wendroff type high-order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.

  12. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
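
    For readers unfamiliar with the time transformation mentioned above: introducing a fictive time tau with dt = Delta(q,p) dtau turns adaptive steps in t into fixed steps in tau. One standard way to keep a Hamiltonian structure, the Poincare transformation (closely related to the extended phase space approach the paper analyzes), is:

      \[
        \frac{dt}{d\tau} = \Delta(q,p), \qquad
        K(q,p) = \Delta(q,p)\,\bigl(H(q,p) - E\bigr), \qquad E = H(q_0,p_0),
      \]

    so that on the level set K = 0 the tau-flow of K reproduces the t-flow of H, and a fixed-step symplectic integrator applied to K yields an adaptive-step integration of the original system.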

  13. Development of a time-stepping sediment budget model for assessing land use impacts in large river basins.

    Science.gov (United States)

    Wilkinson, S N; Dougall, C; Kinsey-Henderson, A E; Searle, R D; Ellis, R J; Bartley, R

    2014-01-15

    The use of river basin modelling to guide mitigation of non-point source pollution of wetlands, estuaries and coastal waters has become widespread. To assess and simulate the impacts of alternate land use or climate scenarios on river washload requires modelling techniques that represent sediment sources and transport at the time scales of system response. Building on the mean-annual SedNet model, we propose a new D-SedNet model which constructs daily budgets of fine sediment sources, transport and deposition for each link in a river network. Erosion rates (hillslope, gully and streambank erosion) and fine sediment sinks (floodplains and reservoirs) are disaggregated from mean annual rates based on daily rainfall and runoff. The model is evaluated in the Burdekin basin in tropical Australia, where policy targets have been set for reducing sediment and nutrient loads to the Great Barrier Reef (GBR) lagoon from grazing and cropping land. D-SedNet predicted annual loads with similar performance to that of a sediment rating curve calibrated to monitored suspended sediment concentrations. Relative to a 22-year reference load time series at the basin outlet derived from a dynamic generalized additive model based on monitoring data, D-SedNet had a median absolute error of 68% compared with 112% for the rating curve. RMS error was slightly higher for D-SedNet than for the rating curve due to large relative errors on small loads in several drought years. This accuracy is similar to existing agricultural system models used in arable or humid environments. Predicted river loads were sensitive to ground vegetation cover. We conclude that the river network sediment budget model provides some capacity for predicting load time series independent of monitoring data in ungauged basins, and for evaluating the impact of land management on river sediment load time series, which is challenging across large regions in data-poor environments. © 2013 Published by Elsevier B.V. All rights reserved.

  14. Hybrid High-Fidelity Modeling of Radar Scenarios Using Atemporal, Discrete-Event, and Time-Step Simulation

    Science.gov (United States)

    2016-12-01

    The abstract is not preserved in this record; only fragments of the thesis figure list survive, referencing a hybrid-sensor simulation display and output produced by a Java Simkit program, and a plot of training/test error versus model complexity (underfitting versus overfitting regions).

  15. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    Science.gov (United States)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.

  16. Nucleoside uptake in macrophages from various murine strains: a short-time and a two-step stimulation model

    International Nuclear Information System (INIS)

    Busolo, F.; Conventi, L.; Grigolon, M.; Palu, G.

    1991-01-01

    The kinetics of [3H]-uridine uptake by murine peritoneal macrophages (pMφ) is altered early after exposure to a variety of stimuli. Alterations caused by Candida albicans, lipopolysaccharide (LPS) and recombinant interferon-gamma (rIFN-gamma) were similar in SAVO, C57BL/6, C3H/HeN and C3H/HeJ mice, and were not correlated with an activation process, as shown by the amount of tumor necrosis factor-alpha (TNF-alpha) released. Short-time exposure to all stimuli resulted in an increased nucleoside uptake by SAVO pMφ, suggesting that the tumoricidal function of this cell depends either on the type of stimulus or on the time at which the specific interaction with the cell receptor takes place. Experiments with priming and triggering signals confirmed the above findings, indicating that the increase or decrease of nucleoside uptake into the cell depends essentially on the chemical nature of the priming stimulus. The triggering stimulus, on the other hand, is only able to amplify the primary response.

  17. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy

  18. Time to pause before the next step

    International Nuclear Information System (INIS)

    Siemon, R.E.

    1998-01-01

    Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than "demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...", which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here.

  19. Time-step coupling for hybrid simulations of multiscale flows

    Science.gov (United States)

    Lockerby, Duncan A.; Duque-Daza, Carlos A.; Borg, Matthew K.; Reese, Jason M.

    2013-03-01

    A new method is presented for the exploitation of time-scale separation in hybrid continuum-molecular models of multiscale flows. Our method is a generalisation of existing approaches, and is evaluated in terms of computational efficiency and physical/numerical error. Comparison with existing schemes demonstrates comparable, or much improved, physical accuracy, at comparable, or far greater, efficiency (in terms of the number of time-step operations required to cover the same physical time). A leapfrog coupling is proposed between the 'macro' and 'micro' components of the hybrid model and demonstrates potential for improved numerical accuracy over a standard simultaneous approach. A general algorithm for a coupled time step is presented. Three test cases are considered where the degree of time-scale separation naturally varies during the course of the simulation. First, the step response of a second-order system composed of two linearly-coupled ODEs. Second, a micro-jet actuator combining a kinetic treatment in a small flow region where rarefaction is important with a simple ODE enforcing mass conservation in a much larger spatial region. Finally, the transient start-up flow of a journal bearing with a cylindrical rarefied gas layer. Our new time-stepping method consistently demonstrates as good as or better performance than existing schemes. This superior overall performance is due to an adaptability inherent in the method, which allows the most-desirable aspects of existing schemes to be applied only in the appropriate conditions.
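
    A schematic of the sub-cycled coupling pattern discussed above, with stand-in models: both update rules and all names here are hypothetical, and the ordering shown is the plain simultaneous one rather than the paper's leapfrog variant.

      # 'Macro' model advanced with large steps DT, closed by a 'micro' model
      # that needs small steps dt and is sub-cycled DT/dt times per macro step.
      def micro_substep(micro, macro_forcing, dt):
          return micro + dt * (macro_forcing - micro)      # stand-in kinetic update

      def macro_step(macro, micro_flux, DT):
          return macro + DT * (1.0 - micro_flux * macro)   # stand-in continuum update

      DT, dt = 0.1, 0.001
      macro, micro = 1.0, 0.0
      for _ in range(100):                  # one coupled cycle per iteration
          for _ in range(int(DT / dt)):     # sub-cycle the micro model
              micro = micro_substep(micro, macro_forcing=macro, dt=dt)
          macro = macro_step(macro, micro_flux=micro, DT=DT)
      print(macro, micro)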

  1. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.

  2. Studies on steps affecting tritium residence time in solid blanket

    International Nuclear Information System (INIS)

    Tanaka, Satoru

    1987-01-01

    For a self-sustaining CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required, i.e. a short residence time of tritium in the blanket. A short residence time means that the tritium inventory in the blanket is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were experimentally evaluated. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories, as well as the tritium in the gas phase, are conceivable. A code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the results of the calculation is shown. The blanket is that of REPUTER-1, the conceptual design of a commercial reversed field pinch fusion reactor studied at the University of Tokyo. Experimental studies on the migration steps of tritium are also reported. (Kako, I.)

  3. Nonlinear stability and time step selection for the MPM method

    Science.gov (United States)

    Berzins, Martin

    2018-01-01

    The Material Point Method (MPM) has been developed from the Particle in Cell (PIC) method over the last 25 years and has proved its worth in solving many challenging problems involving large deformations. Nevertheless, there are many open questions regarding the theoretical properties of MPM. For example, while Fourier methods, as applied to PIC, may provide useful insight, the non-linear nature of MPM makes it necessary to use a full non-linear stability analysis to determine a stable time step. To begin to address this, the stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable time step bound for a model problem. This bound is contrasted against traditional speed-of-sound and CFL bounds and shown to be a realistic stability bound for a model problem.
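
    For context, the traditional bounds the paper contrasts against take a CFL form (h: grid spacing, c: material sound speed, |v|max: maximum particle speed; the exact variant differs between MPM codes):

      \[
        \Delta t \;\le\; C\,\frac{h}{c + |v|_{\max}}, \qquad 0 < C \le 1 .
      \]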

  4. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler steps required. The proposed method achieves similar registration accuracy to a fixed time-step scheme, however at a much lower computational cost.
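
    A minimal sketch of the operation whose step count is being adapted: integrating a stationary velocity field v to a deformation phi = exp(v) with a fixed number of explicit Euler steps, here on a toy 1-D grid (the paper instead chooses the number of steps adaptively from an inverse consistency bound).

      import numpy as np

      def exp_svf(v, n_steps=8):
          """phi_{k+1}(x) = phi_k(x) + v(phi_k(x)) / n_steps, phi_0 = identity."""
          x = np.linspace(0.0, 1.0, v.size)
          phi = x.copy()
          for _ in range(n_steps):
              phi = phi + np.interp(phi, x, v) / n_steps  # sample v at warped points
          return phi

      v = 0.05 * np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, 101))
      print(exp_svf(v)[:5])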

  5. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead predictions with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned on past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. …

  6. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium; radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, a very fast technique to calculate radiochemical yields, which however does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
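
    A minimal sketch of the IRT idea for a single, fully diffusion-controlled pair, using the classic Smoluchowski pair statistics (the paper's algorithms also handle partially diffusion-controlled reactions and interacting species; all parameter values are illustrative).

      import numpy as np
      from scipy.special import erfcinv

      # Pair with reaction radius R, initial separation r0 > R, mutual diffusion
      # coefficient D. Probability of having reacted by time t:
      #   W(t) = (R / r0) * erfc((r0 - R) / sqrt(4 D t)),  W(inf) = R / r0.
      def sample_reaction_time(R, r0, D, rng):
          u = rng.random()
          if u >= R / r0:
              return np.inf                    # pair escapes, never reacts
          x = erfcinv(u * r0 / R)              # invert W(t) = u
          return ((r0 - R) / (2.0 * x)) ** 2 / D

      rng = np.random.default_rng(0)
      R, r0, D = 0.5e-9, 1.0e-9, 5.0e-9        # m, m, m^2/s (illustrative)
      times = [sample_reaction_time(R, r0, D, rng) for _ in range(100_000)]
      print(f"reacted fraction {np.isfinite(times).mean():.3f} vs R/r0 = {R/r0:.3f}")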

  7. Combating cancer one step at a time

    Directory of Open Access Journals (Sweden)

    R.N Sugitha Nadarajah

    2016-10-01

    "[Cancer has] widespread consequences, not only in a medical sense but also socially and economically," says Dr. Abdel-Rahman. "We need to put in every effort to combat this fatal disease," he adds. Tackling the spread of cancer and the increase in the number of cases reported every year is not without its challenges, he asserts. "I see the key challenges as the unequal availability of cancer treatments worldwide, the increasing cost of cancer treatment, and the increased median age of the population in many parts of the world, which carries with it a consequent increase in the risk of certain cancers," he says. "We need to reassess the current pace and orientation of cancer research because, with time, cancer research is becoming industry-oriented rather than academia-oriented, which, in my view, could be very dangerous to the future of cancer research," adds Dr. Abdel-Rahman. "Governments need to provide more research funding to improve the outcome of cancer patients," he explains. His efforts and hard work have led to him receiving a number of distinguished awards, namely the UICC International Cancer Technology Transfer (ICRETT) fellowship in 2014 at the Investigational New Drugs Unit of the European Institute of Oncology, Milan, Italy; an EACR travel fellowship in 2015 at The Christie NHS Foundation Trust, Manchester, UK; and several travel grants to Ireland, Switzerland, Belgium, Spain, and many other countries where he attended medical conferences. Dr. Abdel-Rahman is currently engaged in a project to establish a clinical/translational cancer research center at his institute, which seeks to incorporate various cancer-related disciplines in order to produce a real bench-to-bedside practice, hoping that it would "change research that may help shape the future of cancer therapy". Dr. Abdel-Rahman is also an active founding member of the clinical research unit at his institute and a representative to the prestigious European Organization for Research and Treatment of Cancer (EORTC).

  8. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for…

  9. Exponential Time Differencing With Runge-Kutta Time Stepping for ...

    African Journals Online (AJOL)

    Nafiisah

    …time stepping (ETDRK) for convectively dominated financial problems. For European … We consider a financial market with a single asset with price S which follows the …

  10. A model for two-step ageing

    Indian Academy of Sciences (India)

    Unknown

    … In the present work, a model is developed which takes into account the coherency strains between cluster and matrix and defines a new stability criterion, inclusive of the strain energy term. Experiments were done on AA 7010 aluminium alloy by carrying out a two-step ageing treatment, and the …

  11. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  12. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    A multi-time-step integration method is proposed for solving structural dynamics problems on multiple domains. The method generalizes earlier state-space integration algorithms by introducing displacement constraints via Lagrange multipliers, representing the time-integrated constraint forces over...

  13. Sharing Steps in the Workplace: Changing Privacy Concerns Over Time

    DEFF Research Database (Denmark)

    Jensen, Nanna Gorm; Shklovski, Irina

    2016-01-01

    We present a study of a Danish workplace participating in a step counting campaign. We find that the concerns of employees who choose to participate differ from those of employees who choose not to. Moreover, the privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers …

  14. Showering habits: time, steps, and products used after brain injury.

    Science.gov (United States)

    Reistetter, Timothy A; Chang, Pei-Fen J; Abreu, Beatriz C

    2009-01-01

    This pilot study describes the showering habits of people with brain injury (BI) compared with those of people without BI (WBI). The showering habits of 10 people with BI and 10 people WBI were measured and compared. A videotaped session recorded and documented the shower routine. The BI group spent longer time showering, used more steps, and used fewer products than the WBI group. A moderately significant relationship was found between time and age (r = .46, p = .041). Similarly, we found significant correlations between number of steps and number of products used (r = .64, p = .002) and between the number of products used and education (r = .47, p = .044). Results suggest that people with BI have showering habits that differ from those WBI. Correlations, regardless of group, showed that older people showered longer, and people with more education used more showering products.

  15. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on the adaptive integration of stochastic differential equations, an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included in existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
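
    A hedged sketch of one safe adaptivity pattern consistent with the warning above: the step is chosen from the local collision frequency before the noise is drawn, so no Brownian increments are rejected after the fact (redrawing rejected increments is a classic way to bias an SDE integration). The slowing-down coefficients are toy values, not the Beliaev-Budker operator.

      import numpy as np

      rng = np.random.default_rng(2)

      def nu(v):                        # toy collision frequency, ~1/v^3 when fast
          return 1.0 / (1.0 + abs(v)) ** 3 + 0.01

      D = 1e-4                          # toy (constant) diffusion coefficient
      v, t, t_end, eps = 5.0, 0.0, 50.0, 0.05
      steps = 0
      while t < t_end:
          dt = min(eps / nu(v), t_end - t)   # resolve 1/nu; never reject a step
          v += -nu(v) * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
          t += dt
          steps += 1
      print(f"final speed {v:.3f} after {steps} adaptive steps")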

  16. Block factorization of step response model predictive control problems

    DEFF Research Database (Denmark)

    Kufoalor, D. K.M.; Frison, Gianluca; Imsland, L.

    2017-01-01

    By introducing a stage-wise prediction formulation that enables the use of highly efficient quadratic programming (QP) solution methods, this paper expands the computational toolbox for solving step response MPC problems. We propose a novel MPC scheme that is able to incorporate step response data. The scheme is implemented in the HPMPC framework, and the performance is evaluated through simulation studies. The results confirm that a computationally fast controller is achieved, compared to the traditional step response MPC scheme that relies on an explicit prediction formulation. Moreover, the tailored condensing algorithm exhibits superior performance and produces solution times comparable to those achieved when using a condensing scheme for an equivalent (but much smaller) state-space model derived from first principles. Implementation aspects necessary for high performance on embedded platforms are discussed.

  17. A model for two-step ageing

    Indian Academy of Sciences (India)

    Unknown

    … While this is true in Al–Zn–Mg alloys, two-step ageing leads to inferior properties in Al–Mg–Si alloys. This controversial behaviour in the different alloys can be … Experiments were done on AA 7010 aluminium alloy by carrying out a two-step ageing treatment, and the results fit the new stability criterion. Thus it is found that the …

  18. Modelling Flow over Stepped Spillway with Varying Chute Geometry ...

    African Journals Online (AJOL)

    This study has modeled some characteristics of flows over a stepped spillway with varying chute geometry through a laboratory investigation. Using six physically built stepped spillway models, each having six horizontal plain steps of 4 cm constant height and 30 cm width, and respective chute slope angles of 31°, 32°, …

  19. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    The ISO standard 10303, "Industrial automation systems and integration - Product Data Representation and Exchange", features at present some 30 released parts and is growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects which contributed to the development…

  20. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (Step 0 and Step 1)

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for the investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations, analyses, and evaluations have been conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 0 and Step 1, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) As the investigation progressed from Step 0 to Step 1, the understanding of groundwater flow was enhanced and the hydrogeological model could be revised; 2) The importance of faults as major groundwater flow pathways was demonstrated; 3) Geological and hydrogeological characteristics of faults with orientations of NNW and NE were shown to be especially significant. The main item specified for further investigations is summarized as follows: the geological and hydrogeological characteristics of NNW and NE trending faults are important. (author)

  1. Modelling step-families: exploratory findings.

    Science.gov (United States)

    Bartlema, J

    1988-01-01

    "A combined macro-micro model is applied to a population similar to that forecast for 2035 in the Netherlands in order to simulate the effect on kinship networks of a mating system of serial monogamy. The importance of incorporating a parameter for the degree of concentration of childbearing over the female population is emphasized. The inputs to the model are vectors of fertility rates by age of mother, and by age of father, a matrix of first-marriage rates by age of both partners (used in the macro-analytical expressions), and two parameters H and S (used in the micro-simulation phase). The output is a data base of hypothetical individuals, whose records contain identification number, age, sex, and the identification numbers of their relatives." (SUMMARY IN FRE) excerpt

  2. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.

  3. Adaptive step goals and rewards: a longitudinal growth model of daily steps for a smartphone-based walking intervention.

    Science.gov (United States)

    Korinek, Elizabeth V; Phatak, Sayali S; Martin, Cesar A; Freigoun, Mohammad T; Rivera, Daniel E; Adams, Marc A; Klasnja, Pedja; Buman, Matthew P; Hekler, Eric B

    2018-02-01

    Adaptive interventions are an emerging class of behavioral interventions that allow for individualized tailoring of intervention components over time to a person's evolving needs. The purpose of this study was to evaluate an adaptive step goal + reward intervention, grounded in Social Cognitive Theory and delivered via a smartphone application (Just Walk), using a mixed modeling approach. Participants (N = 20) were overweight (mean BMI = 33.8 ± 6.82 kg/m2), sedentary adults (90% female) interested in participating in a 14-week walking intervention. All participants received a Fitbit Zip that automatically synced with Just Walk to track daily steps. Step goals and expected points were delivered through the app every morning and were designed using a pseudo-random multisine algorithm that was a function of each participant's median baseline steps. Self-report measures were also collected each morning and evening via daily surveys administered through the app. A linear mixed effects model showed that, on average, participants significantly increased their daily steps by 2650 (t = 8.25), and a model with a quadratic time variable indicated a significant inflection point for increasing steps near the midpoint of the intervention (t = -5.01). An adaptive step goal + rewards intervention using a smartphone app appears to be a feasible approach for increasing walking behavior in overweight adults. App satisfaction was high and participants enjoyed receiving variable goals each day. Future mHealth studies should consider the use of adaptive step goals + rewards in conjunction with other intervention components for increasing physical activity.
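
    A sketch of what a pseudo-random multisine goal sequence of the kind described might look like; the frequencies, amplitudes, and rounding rule below are hypothetical stand-ins, not the study's actual signal design.

      import numpy as np

      rng = np.random.default_rng(5)
      baseline_median = 5200                     # steps/day, hypothetical participant
      days = np.arange(14 * 7)                   # 14-week intervention, daily goals
      freqs = np.array([1 / 28, 1 / 14, 1 / 7])  # cycles/day, illustrative
      phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)  # pseudo-random phases
      amps = np.array([0.15, 0.10, 0.05]) * baseline_median

      goals = baseline_median + sum(a * np.sin(2.0 * np.pi * f * days + p)
                                    for a, f, p in zip(amps, freqs, phases))
      goals = np.round(goals / 100.0) * 100.0    # present round-number goals
      print(goals[:7])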

  4. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrödinger equation, we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates…

  5. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and these SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller is the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated to anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  6. Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation

    Science.gov (United States)

    Ilie, Silvana

    2012-12-01

    Stochastic modeling is essential for an accurate description of biochemical network dynamics at the level of a single cell. Biochemically reacting systems often evolve on multiple time-scales, and thus their stochastic mathematical models manifest stiffness. Stochastic models which are stiff are, in addition, computationally very challenging, hence the need for developing effective and accurate numerical methods for approximating their solutions. An important stochastic model of well-stirred biochemical systems is the chemical Langevin equation, a system of stochastic differential equations with multidimensional non-commutative noise. This model is valid in the regime of large molecular populations, far from the thermodynamic limit. In this paper, we propose a variable time-stepping strategy for the numerical solution of a general chemical Langevin equation, which applies for any level of randomness in the system. Our variable stepsize method allows arbitrary values of the time-step. Numerical results on several models arising in applications show significant improvements in the accuracy and efficiency of the proposed adaptive scheme over existing methods, namely strategies based on halving/doubling of the stepsize and fixed step-size ones.
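
    To make the structure of the chemical Langevin equation concrete, a minimal fixed-step Euler-Maruyama sketch for the reversible isomerization A <-> B, with one Wiener increment per reaction channel (the paper's variable step-size strategy is deliberately not reproduced here):

      import numpy as np

      # CLE: dX = sum_j v_j a_j(X) dt + sum_j v_j sqrt(a_j(X)) dW_j
      rng = np.random.default_rng(3)
      k1, k2 = 1.0, 0.5
      x = np.array([5000.0, 0.0])          # copy numbers of A and B
      v = np.array([[-1.0, 1.0],           # channel 1: A -> B
                    [1.0, -1.0]])          # channel 2: B -> A
      dt, t_end = 1e-3, 5.0
      for _ in range(int(t_end / dt)):
          a = np.maximum([k1 * x[0], k2 * x[1]], 0.0)   # propensities, clipped
          dw = rng.standard_normal(2) * np.sqrt(dt)
          x = x + v.T @ (a * dt) + v.T @ (np.sqrt(a) * dw)
      print(x)    # near equilibrium: A ~ k2/(k1+k2)*5000, B ~ k1/(k1+k2)*5000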

  7. Advances in sequential data assimilation and numerical weather forecasting: An Ensemble Transform Kalman-Bucy Filter, a study on clustering in deterministic ensemble square root filters, and a test of a new time stepping scheme in an atmospheric model

    Science.gov (United States)

    Amezcua, Javier

    … Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.

  8. Fitting three-level meta-analytic models in R: A step-by-step tutorial

    Directory of Open Access Journals (Sweden)

    Assink, Mark

    2016-10-01

    Applying a multilevel approach to meta-analysis is a strong method for dealing with dependency of effect sizes. However, this method is relatively unknown among researchers and, to date, has not been widely used in meta-analytic research. Therefore, the purpose of this tutorial was to show how a three-level random effects model can be applied to meta-analytic models in R using the rma.mv function of the metafor package. This application is illustrated by taking the reader through a step-by-step guide to the multilevel analyses, comprising the steps of (1) organizing a data file; (2) setting up the R environment; (3) calculating an overall effect; (4) examining heterogeneity of within-study variance and between-study variance; (5) performing categorical and continuous moderator analyses; and (6) examining a multiple moderator model. By example, the authors demonstrate how the multilevel approach can be applied to meta-analytically examining the association between mental health disorders of juveniles and juvenile offender recidivism. In our opinion, the rma.mv function of the metafor package provides an easy and flexible way of applying a multilevel structure to meta-analytic models in R. Further, the multilevel meta-analytic models can be easily extended so that the potential moderating influence of variables can be examined.

  9. Development of a real time activity monitoring Android application utilizing SmartStep.

    Science.gov (United States)

    Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward

    2016-08-01

    Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform activities of sitting, standing, walking, and cycling while wearing the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of a machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.

  10. Extension of a spectral time-stepping domain decomposition method for dispersive and dissipative wave propagation.

    Science.gov (United States)

    Botts, Jonathan; Savioja, Lauri

    2015-04-01

    For time-domain modeling based on the acoustic wave equation, spectral methods have recently demonstrated promise. This letter presents an extension of a spectral domain decomposition approach, previously used to solve the lossless linear wave equation, which accommodates frequency-dependent atmospheric attenuation and assignment of arbitrary dispersion relations. Frequency-dependence is straightforward to assign when time-stepping is done in the spectral domain, so combined losses from molecular relaxation, thermal conductivity, and viscosity can be approximated with little extra computation or storage. A mode update free from numerical dispersion is derived, and the model is confirmed with a numerical experiment.
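
    A sketch of the per-mode spectral update described above, with toy dispersion and attenuation laws; the letter derives physically motivated ones for air, so nothing below is a fitted atmosphere model.

      import numpy as np

      n, L, dt, c0 = 256, 1.0, 1e-4, 343.0
      k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
      w = c0 * np.abs(k) * (1.0 + 1e-4 * np.abs(k))  # toy dispersion relation
      alpha = 1e-3 * k ** 2                          # toy attenuation, ~k^2

      x = np.linspace(0.0, L, n, endpoint=False)
      p = np.exp(-((x - 0.5) / 0.02) ** 2)           # initial pressure pulse
      P = np.fft.fft(p)
      E0 = np.sum(np.abs(P) ** 2)
      for _ in range(1000):                          # march 0.1 s in spectral domain
          P *= np.exp((1j * w - alpha) * dt)         # exact per-mode update
      print("relative energy left:", np.sum(np.abs(P) ** 2) / E0)

    Because each mode is advanced with its exact phase factor, the update itself introduces no numerical dispersion; accuracy is limited by the assigned w(k) and alpha(k), not by the time step.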

  11. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of the time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. An example scenario, based on current Czech legislation, is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  12. Late time sky as a probe of steps and oscillations in primordial Universe

    Science.gov (United States)

    Ansari Fard, Mohammad; Baghram, Shant

    2018-01-01

    The standard model of cosmology, with nearly Gaussian, isotropic, scale-invariant and adiabatic initial conditions, describes the cosmological observations well. However, the study of any deviation from these conditions will open up a new horizon in the physics of the early universe. In this work, we study the effect of oscillatory and step-like features in the potentials of inflationary models on late-time large-scale structure observations. We mainly study the matter power spectrum, the number density of structures, dark matter halo bias and, specifically, CMB lensing. We show that the oscillatory models can introduce some degeneracy with late-time effects on the BAO scale. We also conclude that the high-frequency oscillatory models favored by Planck data do not have a significant effect on nonlinear structure formation. Finally, we show that inflationary models with step functions which deviate from the standard model on small scales l <= 1 Mpc can be constrained by future experiments via CMB lensing. We propose the idea that CMB lensing is a bias-independent observation which can be used as a small-scale physics probe, owing to the distribution of the lenses at low redshifts. Meanwhile, such models can alter the predictions of the cosmological model for the number density of small structures and can be used as a possible explanation for the galactic-scale crises of ΛCDM.

  13. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of the potential monetary damage of flooding is an essential part of flood risk management. One possibility for estimating the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by generating a set of synthetic, physically and spatially plausible flood events and subsequently estimating the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and a temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single-object basis (residential buildings) for certain return periods. For these impact analyses, official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or the losses associated with a certain probability of occurrence can be estimated for the study area.
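
    A common way to realize the daily-to-hourly step is k-nearest-neighbour resampling: for each synthetic day, borrow the observed hourly profile of a day with a similar daily total and rescale it. The sketch below illustrates this under simplifying assumptions; array names and k are invented, and real applications also condition on season and spatial pattern.

```python
# Sketch of k-nearest-neighbour disaggregation of synthetic daily totals to
# hourly values by borrowing and rescaling observed diurnal profiles.
import numpy as np

def knn_disaggregate(daily_synth, obs_daily, obs_hourly, k=5, rng=None):
    """daily_synth: (n,) synthetic daily totals; obs_daily: (m,) observed daily
    totals; obs_hourly: (m, 24) corresponding observed hourly values."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty((len(daily_synth), 24))
    for i, d in enumerate(daily_synth):
        nearest = np.argsort(np.abs(obs_daily - d))[:k]  # k most similar days
        profile = obs_hourly[rng.choice(nearest)]        # sample one neighbour
        total = profile.sum()
        # rescale the borrowed profile to reproduce the synthetic daily total
        out[i] = profile * (d / total) if total > 0 else 0.0
    return out
```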

  14. Computational Fluid Dynamics Modeling of Flow over Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Raad Hoobi Irzooki

    2017-12-01

    Full Text Available In the present paper, the computational fluid dynamics (CFD) program Flow-3D was used to analyze and study the characteristics of flow energy dissipation over stepped spillways. Three different spillway heights (15, 20 and 25 cm) were used. For each of these models, three numbers of steps N (5, 10 and 25) and three spillway slopes S (0.5, 1 and 1.25) were used. Eight different discharges ranging from 600 to 8500 cm³/s were passed over each of these models; the total number of runs in this study is therefore 216. The energy dissipation over these models and the pressure distribution on the horizontal and vertical step faces of some models were studied. To verify the CFD program, experimental work was conducted on four models of stepped spillway, and five different discharges were passed over each model. The magnitude of energy dissipated in the models was compared with the results of the numerical program under the same conditions. The comparison showed good agreement between them, with a percentage error ranging between -2.01 and 11.13%. Thus, the program Flow-3D is a reasonable numerical program which can be used in this study. Results showed that the energy dissipation increases with increased spillway height and decreased number of steps and spillway slope. Also, the energy dissipation decreases with increasing flow rate. An empirical equation for estimating the energy dissipation was derived using dimensional analysis; the coefficient of determination (R²) of this equation equals 0.766.

  15. Viral DNA Packaging: One Step at a Time

    Science.gov (United States)

    Bustamante, Carlos; Moffitt, Jeffrey R.

    During its life cycle the bacteriophage φ29 actively packages its dsDNA genome into a proteinaceous capsid, compressing its genome to near-crystalline densities against large electrostatic, elastic, and entropic forces. This remarkable process is accomplished by a nano-scale molecular DNA pump - a complex assembly of three protein and nucleic acid rings which utilizes the free energy released in ATP hydrolysis to perform the mechanical work necessary to overcome these large energetic barriers. We have developed a single-molecule optical tweezers assay which has allowed us to probe the detailed mechanism of this packaging motor. By following the rate of packaging of a single bacteriophage as the capsid is filled with genome and as a function of optically applied load, we find that the compression of the genome results in the build-up of an internal force, on the order of ~55 pN, due to the compressed genome. The ability to work against such large forces makes the packaging motor one of the strongest known molecular motors. By titrating the concentrations of ATP, ADP, and inorganic phosphate at different opposing loads, we are able to determine features of the mechanochemistry of this motor - the coupling between the mechanical and chemical cycles. We find that force is generated not upon binding of ATP, but rather upon release of hydrolysis products. Finally, by improving the resolution of the optical tweezers assay, we are able to observe the discrete increments of DNA encapsidated in each cycle of the packaging motor. We find that DNA is packaged in 10-bp increments preceded by the binding of multiple ATPs. The application of large external forces slows the packaging rate of the motor, revealing that the 10-bp steps are actually composed of four 2.5-bp steps which occur in rapid succession. These data show that the individual subunits of the pentameric ring-ATPase at the core of the packaging motor are highly coordinated, with the binding of ATP and the

  16. Fractional time stepping for unsteady engineering calculations on parallel computer systems

    Science.gov (United States)

    Molev, Sergey; Podaruev, Vladimir; Troshin, Alexey

    2017-11-01

    A tool for accelerating explicit schemes is described; its essence is a reduction in the number of arithmetic operations. The cells of the mesh are grouped into levels, each level having its own time step, and the levels are coordinated with one another. The method may be useful for aerodynamics problems with a wide spread of time scales. Causes of degraded accuracy in unsteady simulations are identified, and remedies that correct these troubles are proposed. An example demonstrating the conditions under which the troubles arise, and their successful elimination, is presented. The limit of the achievable acceleration is indicated, and means of enabling effective parallel computing with the method are discussed.
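
    To make the operation-count argument concrete, here is a bookkeeping sketch (my illustration, not the authors' code): cells are binned into levels by their local stable time step, each level advances with a power-of-two multiple of the global minimum step, and the number of cell updates is compared with global time stepping. Flux synchronisation between levels is omitted.

```python
# Count cell updates saved by level-based time stepping on a mesh where a few
# small cells would otherwise force a tiny global step.
import numpy as np

dx = np.concatenate([np.full(10, 0.01), np.full(990, 0.08)])  # few small cells
a = 1.0                                  # advection speed
dt_local = 0.9 * dx / a                  # local CFL-stable steps
dt_min = dt_local.min()
level = np.floor(np.log2(dt_local / dt_min)).astype(int)
dt_level = dt_min * 2.0 ** level         # step actually used by each cell

T = 1.0                                  # simulated time
updates_global = dx.size * T / dt_min    # all cells at the smallest step
updates_local = np.sum(T / dt_level)     # each cell at its own level's step
print("cell-update speedup: %.1fx" % (updates_global / updates_local))
```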

  17. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes (grain lattice and grain boundary diffusion) coupled with a two-step burn-up factor in the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  18. A Two Step Face Alignment Approach Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Ying Cui

    2012-10-01

    Full Text Available Although face alignment using the Active Appearance Model (AAM) is relatively stable, it is known to be sensitive to initial values and not robust under varying conditions. In order to strengthen the performance of AAM for face alignment, a two-step approach combining AAM and the Active Shape Model (ASM) is proposed. In the first step, AAM is used to locate the inner landmarks of the face. In the second step, the extended ASM is used to locate the outer landmarks of the face under the constraint of the inner landmarks estimated by AAM. The two kinds of landmarks are then combined to form the complete set of facial landmarks. The proposed approach is compared with the basic AAM and the progressive AAM methods. Experimental results show that the proposed approach performs much more effectively.

  19. A semi-Lagrangian method for DNS with large time-stepping

    Science.gov (United States)

    Xiu, Dongbin; Karniadakis, George

    2000-11-01

    An efficient time-step discretization based on semi-Lagrangian methods, often used in meteorology, is proposed for direct numerical simulations. It is unconditionally stable and retains high-order accuracy comparable to Eulerian schemes. The structure of the total error is analyzed in detail and shows a non-monotonic trend with the size of the time step. Numerical experiments for a variety of flows show that stable and accurate results are obtained with time steps more than fifty times the CFL-bound time step used in current semi-implicit DNS.
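
    The core of a semi-Lagrangian step is to trace each grid node back along the characteristic and interpolate the field at the departure point, which keeps the scheme stable well beyond the Eulerian CFL limit. A minimal 1D advection sketch (linear interpolation for brevity; the high-order accuracy discussed above requires better interpolants):

```python
# Semi-Lagrangian advection of u_t + a u_x = 0 on a periodic grid, run at
# 10x the explicit CFL-bound time step.
import numpy as np

N, L, a = 200, 1.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial profile
dt = 10.0 * (L / N) / a                  # far above the Eulerian CFL limit

for _ in range(20):
    xd = (x - a * dt) % L                # departure points (periodic domain)
    u = np.interp(xd, x, u, period=L)    # value carried along the characteristic
```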

  20. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  1. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  2. New adaptive time step symplectic integrator: an application to the elliptic restricted three-body problem

    Science.gov (United States)

    Ni, Xiao-Ting; Wu, Xin

    2014-10-01

    The time-transformed leapfrog scheme of Mikkola & Aarseth was specifically designed for a second-order differential equation with two individually separable forms of positions and velocities. It can achieve good numerical accuracy for extremely close two-body encounters in gravitating few-body systems with large mass ratios, whereas the non-time-transformed scheme does not work well. Following this idea, we develop a new explicit symplectic integrator with an adaptive time step that can be applied to a time-dependent Hamiltonian. Our method relies on a time step function having two distinct but equivalent forms and on the inclusion of two pairs of new canonical conjugate variables in the extended phase space. In addition, the Hamiltonian must be modified into a new time-transformed Hamiltonian with three integrable parts. When this method is applied to the elliptic restricted three-body problem, its numerical precision is higher than that of the nonadaptive scheme by several orders of magnitude, and its numerical stability is also better. In particular, it can eliminate the overestimation of Lyapunov exponents and suppress the spurious rapid growth of fast Lyapunov indicators for high-eccentricity orbits of a massless third body. The present technique will be useful for conservative systems, including N-body problems in Jacobian coordinates in the field of solar system dynamics, and for nonconservative systems such as a time-dependent barred galaxy model in a rotating coordinate system.

  3. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  4. A conservative finite volume scheme with time-accurate local time stepping for scalar transport on unstructured grids

    Science.gov (United States)

    Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto

    2015-12-01

    In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to

  5. Step-indexed Kripke models over recursive worlds

    DEFF Research Database (Denmark)

    Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan

    2011-01-01

    …worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models … of complicated language features, as we demonstrate in our semantics of Charguéraud and Pottier's type-and-capability system for an ML-like higher-order language. Moreover, the method provides a high-level understanding of the essence of recent approaches based on step indexing.

  6. Patients with Chronic Obstructive Pulmonary Disease Walk with Altered Step Time and Step Width Variability as Compared with Healthy Control Subjects.

    Science.gov (United States)

    Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas

    2017-06-01

    Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV1/FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in the regularity of gait patterns were found between groups. Patients with COPD walk with an increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width, in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.

  7. Choice stepping response and transfer times: effects of age, fall risk, and secondary tasks.

    Science.gov (United States)

    St George, Rebecca J; Fitzpatrick, Richard C; Rogers, Mark W; Lord, Stephen R

    2007-05-01

    Deterioration with age of the physiological components of balance control increases fall risk. Avoiding a fall can also require higher-level cognitive processing to select correct motor and stepping responses. Here we investigate how a competing cognitive task and an obstacle to stepping affect the initiation and execution phases of choice stepping reaction times in young and older people. Three groups were studied: young persons (YOUNG: 23-40 years, n = 20), older persons with a low risk of falls (OLR: 75-86 years, n = 18), and older persons with a high risk of falls (OHR: 78-88 years, n = 22). Four conditions were examined: choice stepping, choice stepping + obstacle, choice stepping + working memory task, and choice stepping + working memory task + obstacle. Step response and transfer times were measured for each condition, in addition to hesitant stepping, contacts with the obstacle, and errors made in the memory test. The older participant groups had significantly longer response and transfer times than the young group, and the OHR group had significantly longer response and transfer times than the OLR group. There was a significant Group x Secondary task interaction for response time (F(2,215) = 12.6), and this effect increased with age and fall risk. Compared with young people, older people, and more so those at risk of falling, have an impaired ability to initiate and execute quick, accurate voluntary steps, particularly in situations where attention is divided.

  8. forecasting with nonlinear time series model: a monte-carlo

    African Journals Online (AJOL)

    PUBLICATIONS1

    …generated recursively up to any step greater than one. For a nonlinear time series model, a point forecast for step one can be made easily, as in the linear case, but forecasts for steps greater than or equal to … London. Franses, P. H. (1998). Time Series Models for Business and Economic Forecasting, Cambridge University Press.

  9. Step Flow Model of Radial Growth and Shape Evolution of Semiconductor Nanowires

    Science.gov (United States)

    Filimonov, S. N.; Hervieu, Yu. Yu.

    2016-12-01

    A model of the radial growth of vertically aligned nanowires (NWs) via the formation and propagation of monoatomic steps at the nanowire sidewalls is developed. The model makes it possible to describe the step dynamics and the axial growth of the NW self-consistently. It is shown that the formation of NWs with an abrupt change of wire diameter and a non-tapered section at the top might be explained by the bunching of sidewall steps due to the presence of a strong sink for adatoms at the NW top. The Ehrlich-Schwoebel barrier for the attachment of adatoms to the descending step favors step bunching at the beginning of the radial growth and promotes the decay of the bunch at later times of the NW growth.

  10. Modified two-step potential model: Heavy mesons | Sharma | JASSA ...

    African Journals Online (AJOL)

    Modified two-step potential model: Heavy mesons. L K Sharma, P K Jain, V R Mundembe. http://dx.doi.org/10.4314/jassa.v4i2.16898

  11. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
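
    The kind of task-queuing model described here can be simulated in a few lines: maintain a list of tasks with random priorities, execute the highest-priority one (occasionally a random one), and record waiting times, whose distribution develops a heavy tail. A sketch with illustrative parameters; this is the general Barabási-type queue protocol, not necessarily the exact variant analysed in the paper:

```python
# Priority task queue: execute the highest-priority task (rarely a random
# one), replace it with a fresh task, and record waiting times.
import numpy as np

rng = np.random.default_rng(1)
list_len, steps, p_random = 100, 200_000, 0.01
priority = rng.random(list_len)
born = np.zeros(list_len, dtype=int)     # step at which each task entered
waits = []

for t in range(steps):
    if rng.random() < p_random:
        i = int(rng.integers(list_len))  # occasionally pick a random task
    else:
        i = int(np.argmax(priority))     # otherwise the highest priority
    waits.append(t - born[i])            # waiting time of the executed task
    priority[i] = rng.random()           # replace it with a new task
    born[i] = t

# A log-log histogram of `waits` shows an approximately power-law tail.
```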

  12. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in the numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, which for stability must scale with the square of the grid resolution (Δx²). Although an implicit scheme relaxes the stability constraint, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
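
    For orientation, here is a sketch of the first-order RKL1 variant for 1D diffusion, based on my reading of the scheme's published recursion; the paper advocates the second-order RKL2 variant, whose extra coefficients are omitted here. With s stages, one super-step covers roughly (s² + s)/2 explicit diffusion steps.

```python
# First-order Runge-Kutta-Legendre super-time-stepping (RKL1) for 1D diffusion
# on a periodic grid; the stage recursion follows the Legendre recurrence.
import numpy as np

def diffusion(u, D, dx):                 # explicit diffusion operator L(u)
    return D * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def rkl1_step(u, dt, s, D, dx):
    w1 = 2.0 / (s**2 + s)
    y_prev2 = u
    y_prev1 = u + w1 * dt * diffusion(u, D, dx)           # stage j = 1
    for j in range(2, s + 1):
        mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
        y = mu * y_prev1 + nu * y_prev2 + mu * w1 * dt * diffusion(y_prev1, D, dx)
        y_prev2, y_prev1 = y_prev1, y
    return y_prev1

N, D, s = 128, 1.0, 8
dx = 1.0 / N
u = np.sin(2.0 * np.pi * np.arange(N) * dx)
dt = (0.5 * dx**2 / D) * (s**2 + s) / 2.0    # one super-step ~ 36 explicit steps
u = rkl1_step(u, dt, s, D, dx)
```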

  13. Real-time monitoring and control of the load phase of a protein A capture step.

    Science.gov (United States)

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura; Hubbuch, Jürgen

    2017-02-01

    The load phase in preparative Protein A capture steps is commonly not controlled in real time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin lifetime studies. While this results in reduced productivity in batch mode, the bottleneck of suitable real-time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the harvested cell culture fluid (HCCF) was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. This information was applied to automatically terminate the load phase when a product breakthrough of 1.5 mg/mL was reached. In the second part of the study, the sensitivity of the method was further increased by considering only small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited an RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase when a product breakthrough of 0.15 mg/mL was reached. The proposed method hence has potential for the real-time monitoring and control of capture steps in large-scale production. This might enhance resin capacity utilization, eliminate time-consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368-373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc.
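
    As a rough illustration of the chemometric step, the sketch below calibrates a PLS model from spectra to concentration and uses its prediction as a termination criterion. scikit-learn and synthetic spectra stand in for real chromatography data; only the 1.5 mg/mL threshold is taken from the abstract, everything else is invented.

```python
# Calibrate PLS from UV/Vis-like spectra to mAb concentration, then test a
# load-phase termination criterion on a new spectrum.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 200
conc = rng.uniform(0.0, 2.0, n_samples)              # mAb in effluent, mg/mL
basis = rng.normal(size=n_wavelengths)               # synthetic pure spectrum
spectra = np.outer(conc, basis) + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

pls = PLSRegression(n_components=3).fit(spectra, conc)

def breakthrough_reached(spectrum, threshold=1.5):
    """True once predicted effluent mAb (mg/mL) reaches the set breakthrough."""
    return pls.predict(spectrum.reshape(1, -1)).item() >= threshold
```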

  14. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S., ...

  15. Numerical solution of unsteady generalized Newtonian and Oldroyd-B fluids flow by dual time-stepping method

    Science.gov (United States)

    Keslerová, R.; Kozel, K.

    2014-03-01

    This work deals with the numerical solution of viscous and viscoelastic fluid flows. The governing system of equations is based on the balance laws for mass and momentum for incompressible laminar fluids. Different models for the stress tensor are considered: for viscous flow the Newtonian model is used, while the Oldroyd-B model is used to describe the behaviour of a mixture of viscous and viscoelastic fluids. The numerical solution of the described models is based on a cell-centered finite volume method in conjunction with the artificial compressibility method. For time integration an explicit multistage Runge-Kutta scheme is used. For unsteady computations the dual-time stepping method is considered. Its principle is as follows: an artificial time is introduced, and the artificial compressibility method is applied in this artificial time.
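
    Stripped of the spatial discretization, dual time stepping amounts to solving each implicit physical time step by marching a pseudo-time iteration to steady state. A minimal sketch on a scalar ODE, with BDF2 in physical time and explicit pseudo-time relaxation; all values are illustrative:

```python
# Dual time stepping for du/dt = f(u): each physical step is converged by an
# inner march in artificial (pseudo-)time.
def f(u):                                # stand-in for the spatial residual
    return -u

dt, dtau, n_steps = 0.1, 0.02, 50
u_nm1 = u_n = 1.0                        # first step reuses u^0 as u^{-1}
for _ in range(n_steps):
    u = u_n                              # initial guess for u^{n+1}
    for _ in range(200):                 # pseudo-time sub-iterations
        res = f(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)  # BDF2 residual
        u += dtau * res                  # explicit march in artificial time
        if abs(res) < 1e-10:             # converged: unsteady residual ~ 0
            break
    u_nm1, u_n = u_n, u                  # exact solution decays like exp(-t)
```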

  16. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and the same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have been successfully analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the aforementioned energy region. (author). 10 refs., 2 figs

  17. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    Science.gov (United States)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as a propellant feedline system, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in the network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
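
    A generic feedback controller of this kind monitors a local error estimate and feeds it back into the step size. The sketch below uses step doubling with explicit Euler on a fast-then-slow scalar transient; the ODE, tolerance, and gain limits are illustrative assumptions, not the paper's network-flow model.

```python
# Feedback-controlled adaptive time stepping with a step-doubling error
# estimate: the step grows in slow phases and shrinks in fast transients.
import numpy as np

def f(t, u):                              # fast transient, then slow relaxation
    return -50.0 * u + 49.0 * np.exp(-t)

def euler(t, u, dt):
    return u + dt * f(t, u)

t, u, dt, tol = 0.0, 1.0, 1e-4, 1e-5
while t < 1.0:
    big = euler(t, u, dt)                                        # one full step
    half = euler(t + dt / 2.0, euler(t, u, dt / 2.0), dt / 2.0)  # two halves
    err = abs(half - big)                 # local error estimate by doubling
    if err <= tol:
        t, u = t + dt, half               # accept the step
    # deadbeat controller: exponent 1/2 matches Euler's local order
    dt *= min(5.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
```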

  18. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and hierarchical time steps. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.

  19. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management. Actual ET depends on the estimation of a water stress index and the average soil water in the crop root zone, and therefore on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root-zone water content, time step, crop sensitivity, and soil. In this paper different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals the evapotranspiration, ET. In this approach, however, deep percolation is generally ignored owing to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. The resulting differential equation may be solved analytically or numerically with different algorithms. We adopted four different numerical methods to approximate the differential equation: the explicit Euler, implicit Euler, modified Euler (midpoint), and third-order Heun methods. Three general soil types (sand, silt, and clay) and three crop sensitivity classes under the Nishaboor plain were used. The standard soil water depletion fraction pstd (corresponding to ETc = 5 mm d-1), below which the crop faces water stress, was adopted to characterize crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd = 0.2, moderate crops with pstd = 0.5, and resistant crops with pstd = 0
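
    As an illustration of how the method and time step interact, the sketch below integrates the depletion equation with explicit Euler, Heun, and implicit Euler at several step sizes; the linear stress function and the TAW, pstd, and ETc values are illustrative assumptions, not the paper's exact formulation.

```python
# Compare explicit Euler, Heun, and implicit Euler on the root-zone depletion
# equation dS/dt = -Ks(S)*ETc with a linear stress function.
TAW, pstd, ETc = 100.0, 0.5, 5.0        # mm, stress fraction, mm/day

def dSdt(S):
    return -min(1.0, S / (pstd * TAW)) * ETc

def explicit_euler(S, dt):
    return S + dt * dSdt(S)

def heun(S, dt):
    k1 = dSdt(S)
    k2 = dSdt(S + dt * k1)
    return S + 0.5 * dt * (k1 + k2)

def implicit_euler(S, dt):              # solved by fixed-point iteration
    S_new = S
    for _ in range(50):
        S_new = S + dt * dSdt(S_new)
    return S_new

for dt in (0.25, 1.0, 5.0):             # time step in days
    S = [40.0, 40.0, 40.0]              # start inside the water-stressed regime
    for _ in range(int(10 / dt)):
        S = [explicit_euler(S[0], dt), heun(S[1], dt), implicit_euler(S[2], dt)]
    print(dt, ["%.2f" % s for s in S])
```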

  20. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    2013-01-01

    By comparison with the solution of the time-dependent Schrödinger equation, we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates with situations where the ensemble average of the force deviates considerably from the force calculated at the average position of the trajectories of the ensemble. We identify the general trends for the applicability of the semiclassical model in terms of intensity, ellipticity, and wavelength of the laser pulse…

  1. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

    The semiclassical approximation is applied in second order to describe the time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interference between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. When these paths are combined within the independent-particle frozen-orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering, and with it the capability for interference, is regained when one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments on single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].

  2. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up transformers (GSUs). INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated their use. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the 2014 August Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
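
    To give a flavour of such a life-consumption calculation, the sketch below integrates an Arrhenius aging acceleration factor over a hot-spot temperature profile, using IEEE C57.91-style constants (110 °C reference hot spot, 180,000 h normal insulation life). The temperature profile is invented, and this is my illustration rather than the report's implementation.

```python
# Type II life-consumption sketch: hot-spot winding temperature drives an
# Arrhenius aging acceleration factor whose sum over time gives the fraction
# of insulation life used.
import numpy as np

hours = np.arange(24)
theta_hs = 85.0 + 25.0 * np.sin(np.pi * hours / 24.0)   # hot-spot temp, deg C

F_aa = np.exp(15000.0 / 383.0 - 15000.0 / (theta_hs + 273.0))  # aging factor
life_fraction = F_aa.sum() / 180000.0    # equivalent hours / normal life
print("equivalent aging hours: %.2f, life used: %.2e"
      % (F_aa.sum(), life_fraction))
```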

  3. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge-Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
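
    For orientation, the canonical example of an SSP integrator is the three-stage, third-order method of Shu and Osher, which advances the solution through convex combinations of forward Euler steps; a sketch with an upwind advection operator as the stand-in spatial discretization:

```python
# Classical SSPRK(3,3): each stage is a convex combination of forward Euler
# steps, which is the structure that preserves strong stability.
import numpy as np

def ssprk33(u, dt, L):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

N = 100
dx = 1.0 / N
L_up = lambda u: -(u - np.roll(u, 1)) / dx       # upwind for u_t + u_x = 0
u = np.where(np.abs(np.linspace(0.0, 1.0, N) - 0.5) < 0.1, 1.0, 0.0)
for _ in range(50):
    u = ssprk33(u, 0.5 * dx, L_up)               # within the SSP step limit
```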

  4. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  5. Comparison of sum-of-hourly and daily time step standardized ASCE Penman-Monteith reference evapotranspiration

    Science.gov (United States)

    Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa

    2017-10-01

    The objective of this study was to quantify the differences associated with using 24-h time step reference evapotranspiration (ETo), as compared with the sum of hourly ETo computations, with the standardized ASCE Penman-Monteith (ASCE-PM) model for semi-arid dry conditions at Fanaye and Ndiaye (Senegal) and semi-arid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed good agreement between the sum of hourly ETo and the daily time step ETo at all four locations. The daily time step overestimated the daily ETo relative to the sum of hourly ETo by 1.3 to 8% over the whole study periods. However, the magnitudes of the ETo values and the ratio of the ETo values estimated by both methods depend on location and month. The sum of hourly ETo tends to give higher ETo during wintertime at Fanaye and Sapu, while the daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, daily time step estimates of ETo were higher throughout the year. The simple linear regression slopes between the sum of 24-h ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08, with a high coefficient of determination (R² ≥ 0.87). Applying the hourly ETo estimation method could improve the accuracy of ETo estimates for meeting irrigation requirements in precision agriculture.
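
    For reference, the standardized ASCE-PM equation has a single algebraic form whose constants Cn and Cd change with the time step and reference surface, which is precisely where daily and sum-of-hourly computations can diverge. A sketch with illustrative weather inputs; the constants quoted are the standardized short-grass values as I recall them (daily: 900, 0.34; hourly daytime: 37, 0.24):

```python
# Standardized ASCE Penman-Monteith reference ET, evaluated at a daily and an
# hourly time step for the same (illustrative) weather inputs.
def asce_pm_eto(delta, Rn, G, gamma, T, u2, es, ea, Cn, Cd):
    """delta, gamma in kPa/C; Rn, G in MJ m-2 per step; T in C; u2 in m/s;
    es, ea in kPa; returns ETo in mm per step."""
    num = 0.408 * delta * (Rn - G) + gamma * (Cn / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + Cd * u2))

eto_daily = asce_pm_eto(0.189, 15.0, 0.0, 0.066, 25.0, 2.0, 3.17, 1.90,
                        Cn=900.0, Cd=0.34)   # 24-h time step, grass reference
eto_hour = asce_pm_eto(0.189, 2.0, 0.2, 0.066, 25.0, 2.0, 3.17, 1.90,
                       Cn=37.0, Cd=0.24)     # one daytime hour, grass reference
```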

  6. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  7. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  8. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  9. Electric and hybrid electric vehicle study utilizing a time-stepping simulation

    Science.gov (United States)

    Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.

    1992-01-01

    The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.

  10. A Four-Step Model for Teaching Selection Interviewing Skills

    Science.gov (United States)

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  11. Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge

    Science.gov (United States)

    Hidetsugu Sakaguchi; Sahim M. Kourkouss

    2010-06-01

    We construct a two-dimensional electric circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of the conductance. Branched patterns of surface leaders surrounded by surface corona appear in numerical simulations. The fractal dimension of the branched discharge patterns is calculated as the voltage and capacitance are changed. We find that surface leaders often grow stepwise in time, as is observed in the stepped leaders of lightning.

  12. Real-Time Step Length Control Method for a Biped Robot

    Science.gov (United States)

    Aiko, Takahiro; Ohnishi, Kouhei

    In this paper, a real-time step length control method for a biped robot is proposed. In a human environment, it is necessary for a biped robot to change its gait in real time, since it is required to walk according to the situation. With the proposed method, the center-of-gravity trajectory and the swing-leg trajectory are generated in real time, with the step length as the command value. For generating the center-of-gravity trajectory, we employ the Linear Inverted Pendulum Mode and additionally consider walking stability via the ZMP (zero moment point). In order to demonstrate the proposed method, simulations and experiments of biped walking are performed.
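
    The Linear Inverted Pendulum Mode gives the center-of-gravity trajectory in closed form, which is what makes real-time generation cheap. A sketch of one support phase; gravity, CoM height, and the initial state are illustrative, and ZMP constraint handling and the swing-leg trajectory are omitted:

```python
# Closed-form CoM trajectory of the Linear Inverted Pendulum Mode: with
# constant CoM height z_c, the sagittal dynamics x'' = (g/z_c)*x have the
# cosh/sinh solution below.
import numpy as np

g, z_c = 9.81, 0.8
Tc = np.sqrt(z_c / g)                    # pendulum time constant

def lipm_com(x0, v0, t):
    """CoM position and velocity relative to the support foot at time t."""
    x = x0 * np.cosh(t / Tc) + Tc * v0 * np.sinh(t / Tc)
    v = (x0 / Tc) * np.sinh(t / Tc) + v0 * np.cosh(t / Tc)
    return x, v

t = np.linspace(0.0, 0.4, 50)            # one single-support phase
x, v = lipm_com(-0.15, 0.8, t)           # start 0.15 m behind the ankle
```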

  13. Women's steps of change and entry into drug abuse treatment. A multidimensional stages of change model.

    Science.gov (United States)

    Brown, V B; Melchior, L A; Panter, A T; Slaughter, R; Huba, G J

    2000-04-01

    The Transtheoretical, or Stages of Change, Model has been applied to the investigation of help-seeking related to a number of addictive behaviors. Overall, the model has been shown to be very important in understanding the process of help-seeking. However, substance abuse rarely exists in isolation from other health, mental health, and social problems. The present work extends the original Stages of Change Model by proposing "Steps of Change" as they relate to entry into substance abuse treatment programs for women. Readiness to make life changes in four domains (domestic violence, HIV sexual risk behavior, substance abuse, and mental health) is examined in relation to entry into four substance abuse treatment modalities (12-step, detoxification, outpatient, and residential). The Steps of Change Model hypothesizes that the help-seeking behavior of substance-abusing women may reflect a hierarchy of readiness based on the immediacy, or time urgency, of their treatment issues. For example, women in battering relationships may be ready to make changes to reduce their exposure to violence before admitting readiness to seek substance abuse treatment. The Steps of Change Model was examined in a sample of 451 women contacted through a substance abuse treatment-readiness program in Los Angeles, California. A series of logistic regression analyses predicts entry into each of the four treatment modalities. Results suggest a multidimensional Stages of Change Model that may extend to other populations and to other types of help-seeking behaviors.

  14. Modified Step Variational Iteration Method for Solving Fractional Biochemical Reaction Model

    Directory of Open Access Journals (Sweden)

    R. Yulita Molliq

    2011-01-01

    Full Text Available A new method, called the modified step variational iteration method (MoSVIM), is introduced and used to solve the fractional biochemical reaction model. The MoSVIM uses general Lagrange multipliers for the construction of the correction functional for the problems, and it runs with a step approach, namely dividing the interval into subintervals with a given time step and obtaining the solution on each subinterval, as well as adopting a nonzero auxiliary parameter ℏ to control the convergence region of the series solutions. The MoSVIM yields an analytical solution in the form of a rapidly convergent infinite power series with easily computable terms and produces a good approximate solution on enlarged intervals for the fractional biochemical reaction model. The accuracy of the results obtained is in excellent agreement with the Adams-Bashforth-Moulton method (ABMM).

  15. Two-Step Estimation of Models Between Latent Classes and External Variables.

    Science.gov (United States)

    Bakk, Zsuzsa; Kuha, Jouni

    2017-11-17

    We consider models which combine latent class measurement models for categorical latent variables with structural regression models for the relationships between the latent classes and observed explanatory and response variables. We propose a two-step method of estimating such models. In its first step, the measurement model is estimated alone, and in the second step the parameters of this measurement model are held fixed when the structural model is estimated. Simulation studies and applied examples suggest that the two-step method is an attractive alternative to existing one-step and three-step methods. We derive estimated standard errors for the two-step estimates of the structural model which account for the uncertainty from both steps of the estimation, and show how the method can be implemented in existing software for latent variable modelling.

  16. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    Full Text Available The goal of this paper was to develop an educationally oriented application based on the Data Mining Apriori Algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact of problem constraints (values of the support and confidence variables, or size of the transactional database) on the speed of execution. The paper presents a brief overview of the Apriori Algorithm, aspects of the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface, and the process of mining a test transactional database. The impact of some constraints on the speed of the algorithm is also measured experimentally, although different approaches to increasing execution speed are not systematically reviewed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
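
    As a minimal, self-contained illustration of the algorithm such an application implements (Python; the function and the toy database are hypothetical, not the paper's code), the sketch below performs the two classic Apriori phases, support counting and candidate generation with subset pruning:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets with their support (fraction of transactions)."""
    n = len(transactions)
    k_sets = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while k_sets:
        # Support counting: one pass over the database per level
        counts = {c: sum(1 for t in transactions if c <= t) for c in k_sets}
        current = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(current)
        # Candidate generation: join frequent k-itemsets, prune any candidate
        # with an infrequent (k-1)-subset (the Apriori property)
        prev = list(current)
        k_sets = {a | b for a in prev for b in prev if len(a | b) == len(a) + 1}
        k_sets = {c for c in k_sets
                  if all(frozenset(s) in current for s in combinations(c, len(c) - 1))}
    return frequent

db = [frozenset(t) for t in [{"bread", "milk"}, {"bread", "beer"},
                             {"bread", "milk", "beer"}, {"milk"}]]
print(apriori(db, min_support=0.5))
```

    Raising min_support (or enlarging the database) changes the number of candidate levels and counting passes, which is exactly the execution-speed effect the application lets students measure.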

  17. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  18. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary conditions by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  19. Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach

    Directory of Open Access Journals (Sweden)

    Baah Aye Kusi

    2015-09-01

    Full Text Available We investigate bank profitability in Ghana over periods before, during, and after the global financial crisis, using the five-step DuPont model for the first time. We adapt the variables of the five-step DuPont model to explain bank profitability with a panel of twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, fixed and random effects models with robust standard errors are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (asset to equity), and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost returns to equity holders through the above-mentioned variables. We further report that the five-step DuPont model better explains the total variation (94%) in bank profitability in Ghana compared with earlier findings, suggesting that bank-specific variables are key in explaining ROE in banks in Ghana. We cite no empirical study that has employed the five-step DuPont model, making our study unique and different from earlier studies in its assertion that bank-specific variables are core to explaining bank profitability.
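
    For orientation, the textbook five-step DuPont identity decomposes ROE into the five ratios named in the abstract (the paper's exact variable definitions may differ in detail):

    $$
    \mathrm{ROE}
    = \underbrace{\frac{\text{Net income}}{\text{Pre-tax income}}}_{\text{tax burden}}
    \times \underbrace{\frac{\text{Pre-tax income}}{\text{EBIT}}}_{\text{interest burden}}
    \times \underbrace{\frac{\text{EBIT}}{\text{Sales}}}_{\text{operating margin}}
    \times \underbrace{\frac{\text{Sales}}{\text{Assets}}}_{\text{asset turnover}}
    \times \underbrace{\frac{\text{Assets}}{\text{Equity}}}_{\text{leverage}}
    $$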

  20. Modelling urban travel times

    NARCIS (Netherlands)

    Zheng, F.

    2011-01-01

    Urban travel times are intrinsically uncertain due to a lot of stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  1. Shear stress with appropriate time-step and amplification enhances endothelial cell retention on vascular grafts.

    Science.gov (United States)

    Liu, Haifeng; Gong, Xianghui; Jing, Xiaohui; Ding, Xili; Yao, Yuan; Huang, Yan; Fan, Yubo

    2017-11-01

    Endothelial cells (ECs) are sensitive to changes in shear stress. The application of shear stress to ECs has been well documented to improve cell retention when placed into a haemodynamically active environment. However, the relationship between the time-step and amplification of shear stress on EC functions remains elusive. In the present study, human umbilical cord veins endothelial cells (HUVECs) were seeded on silk fibroin nanofibrous scaffolds and were preconditioned by shear stress at different time-steps and amplifications. It is shown that gradually increasing shear stress with appropriate time-steps and amplification could improve EC retention, yielding a complete endothelial-like monolayer both in vitro and in vivo. The mechanism of this improvement is mediated, at least in part, by an upregulation of integrin β1 and focal adhesion kinase (FAK) expression, which contributed to fibronectin (FN) assembly enhancement in ECs in response to the shear stress. A modest gradual increase in shear stress was essential to allow additional time for ECs to gradually acclimatize to the changing environment, with the goal of withstanding the physiological levels of shear stress. This study recognized that the time-steps and amplifications of shear stress could regulate EC tolerance to shear stress and the anti-thrombogenicity function of engineered vascular grafts via an extracellular cell matrix-specific, mechanosensitive signalling pathway and might prevent thrombus formation in vivo. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
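
    For reference, the standard Aarseth criterion referred to here chooses the step from the acceleration $\mathbf{a}$ and its time derivatives, in the widely quoted form (notation assumed, with $\eta$ a dimensionless accuracy parameter):

    $$ \Delta t \;=\; \sqrt{\eta\,\frac{|\mathbf{a}|\,|\mathbf{a}^{(2)}| + |\mathbf{a}^{(1)}|^{2}}{|\mathbf{a}^{(1)}|\,|\mathbf{a}^{(3)}| + |\mathbf{a}^{(2)}|^{2}}} $$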

  3. Optimal order and time-step criterion for Aarseth-type N-body integrators

    Science.gov (United States)

    Makino, Junichiro

    1991-03-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed.

  4. Optimal order and time-step criterion for Aarseth-type N-body integrators

    Energy Technology Data Exchange (ETDEWEB)

    Makino, Junichiro (Tokyo Univ. (Japan))

    1991-03-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs.

  5. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  6. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    Science.gov (United States)

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
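
    To make the "pattern of uptake-times" concrete, the snippet below (Python; a sketch, not the authors' optimization algorithm) builds the classic stepped-wedge exposure matrix. A hybrid design in the paper's sense would additionally contain all-zero and all-one rows, i.e. parallel control and intervention arms:

```python
import numpy as np

def stepped_wedge_design(n_clusters, n_periods):
    """0/1 exposure matrix: cluster i switches to the intervention at its
    assigned uptake period and stays exposed until the end of the study."""
    X = np.zeros((n_clusters, n_periods), dtype=int)
    steps = np.linspace(1, n_periods - 1, n_clusters).round().astype(int)
    for i, s in enumerate(steps):
        X[i, s:] = 1
    return X

print(stepped_wedge_design(5, 6))
```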

  7. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    Science.gov (United States)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
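
    A generic sketch of such a neighbor constraint (Python; the ratio-capping rule is illustrative, not the paper's exact condition) caps each patch's locally constrained step at a fixed multiple of its neighbors' steps and iterates to a fixed point, since lowering one patch's step can in turn constrain others:

```python
import numpy as np

def constrained_local_steps(dt_local, neighbors, ratio=2.0):
    """Limit each patch's time step to at most `ratio` times the smallest
    step among its neighbors, iterating until no step changes."""
    dt = np.asarray(dt_local, dtype=float).copy()
    changed = True
    while changed:
        changed = False
        for i, nbrs in neighbors.items():
            cap = ratio * min(dt[j] for j in nbrs)
            if dt[i] > cap:
                dt[i] = cap
                changed = True
    return dt

# Four patches in a line; patch 2 has a very small locally constrained step.
dt_local = [0.8, 0.8, 0.05, 0.8]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(constrained_local_steps(dt_local, neighbors))   # [0.2, 0.1, 0.05, 0.1]
```

    The small step propagates outward in damped form, which is the qualitative behavior a CFL-respecting local time stepping scheme must reproduce.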

  8. Validation and comparison of shank and lumbar-worn IMUs for step time estimation.

    Science.gov (United States)

    Johnston, William; Patterson, Matthew; O'Mahony, Niamh; Caulfield, Brian

    2017-10-26

    Gait assessment is frequently used as an outcome measure to determine changes in an individual's mobility and disease processes. Inertial measurement units (IMUs) are quickly becoming commonplace in gait analysis. The purpose of this study was to determine and compare the validity of shank and lumbar IMU mounting locations in the estimation of temporal gait features. Thirty-seven adults performed 20 walking trials each over a gold standard force platform while wearing shank and lumbar-mounted IMUs. Data from the IMUs were used to estimate step times using previously published algorithms and were compared with those derived from the force platform. There was an excellent level of correlation between the force platform and shank (r=0.95) and lumbar-mounted (r=0.99) IMUs. Bland-Altman analysis demonstrated high levels of agreement between the IMU and the force platform step times. Confidence interval widths were 0.0782 s for the shank and 0.0367 s for the lumbar. Both IMU mounting locations provided accurate step time estimations, with the lumbar demonstrating a marginally superior level of agreement with the force platform. This validation indicates that the IMU system is capable of providing step time estimates within 2% of the gold standard force platform measurement.
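
    A minimal sketch of this kind of step-time estimation (Python; synthetic signal, with a plain peak picker standing in for the published shank- and lumbar-specific event-detection algorithms):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-in for a gait signal: one dominant peak per step (~2 steps/s)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)

# One detected peak per step; differencing the peak times gives step times
peaks, _ = find_peaks(sig, height=0.5, distance=int(0.3 * fs))
step_times = np.diff(t[peaks])
print(f"mean step time: {step_times.mean():.3f} s")
```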

  9. Theoretical intercomparison of multi-step direct reaction models and computational intercomparison of multi-step direct reaction models

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-08-01

    In recent years several statistical theories have been developed concerning multistep direct (MSD) nuclear reactions. In addition, dominant in applications is a whole class of semiclassical models that may be subsumed under the heading of 'generalized exciton models'. These are basically MSD-type extensions on top of compound-like concepts. In this report the relationship between their underlying statistical MSD postulates is highlighted. A common framework is outlined that enables one to generate the various MSD theories by assigning statistical properties to different parts of the nuclear Hamiltonian. It is then shown that distinct forms of nuclear randomness are embodied in the mentioned theories. All these theories appear to be very similar at a qualitative level. In order to explain the high-energy tails and forward-peaked angular distributions typical for particles emitted in MSD reactions, it is imagined that the incident continuum particle stepwise loses its energy and direction in a sequence of collisions, thereby creating new particle-hole pairs in the target system. At each step emission may take place. The statistical aspect comes in because many continuum states are involved in the process. These are supposed to display chaotic behavior, the associated randomness assumption giving rise to important simplifications in the expression for MSD emission cross sections. This picture suggests that the mentioned MSD models can be interpreted as variants of essentially one and the same theory. However, this appears not to be the case. To show this, the usual MSD distinction within the composite reacting nucleus between the fast continuum particle and the residual system must be made precise: the residual interactions among the nucleons of the core are to be distinguished from those of the leading particle with the residual system. This distinction turns out to be crucial to the present analysis. 27 refs.; 5 figs.; 1 tab

  10. Elderly fallers enhance dynamic stability through anticipatory postural adjustments during a choice stepping reaction time

    Directory of Open Access Journals (Sweden)

    Romain Tisserand

    2016-11-01

    Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. Forty-four community-dwelling elderly subjects (20 F and 24 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice in balance strategy probably comes from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of fall.

  11. Elderly Fallers Enhance Dynamic Stability Through Anticipatory Postural Adjustments during a Choice Stepping Reaction Time

    Science.gov (United States)

    Tisserand, Romain; Robert, Thomas; Chabaud, Pascal; Bonnefoy, Marc; Chèze, Laurence

    2016-01-01

    In the case of disequilibrium, the capacity to step quickly is critical to avoid falling in the elderly. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), where elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. Forty-four community-dwelling elderly subjects (20 F and 24 NF) performed a CSRT where kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off, in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice in balance strategy probably comes from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of fall. PMID:27965561

  12. Eliminating time dispersion from seismic wave modeling

    Science.gov (United States)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
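
    A commonly quoted form of this error (notation assumed here) makes the statement concrete: with a second-order temporal finite difference, a mode of true angular frequency $\omega$ oscillates numerically at

    $$ \omega_{\text{num}} \;=\; \frac{2}{\Delta t}\,\arcsin\!\left(\frac{\omega\,\Delta t}{2}\right) \;\ge\; \omega, $$

    so every event arrives slightly early by an amount depending only on $\omega$ and $\Delta t$; removing time dispersion then amounts to resampling the seismogram according to the inverse of this frequency map.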

  13. Modelling Flow Over Stepped Spillway with Varying Chute Geometry

    African Journals Online (AJOL)

    2012-07-02

    A basic dimensional analysis of the flow over the chute of the stepped spillway, assuming that the dominant feature is the momentum exchange between the free stream and the cavity flow within the steps of the spillway [2,3,4], is as presented in equation (1):

    $$ f_1\!\left(H_o,\,H_n,\,U_m,\,h_w,\,L_i,\,L_s,\,K_s,\,\tan\theta,\,\mu_w,\,\rho_w,\,g\right) = 0 \qquad (1) $$

  14. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  15. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    Science.gov (United States)

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  16. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    Science.gov (United States)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  17. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-01

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
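
    The flavor of such convergence-based step control can be sketched generically (Python; the thresholds, growth factors, and monitored quantity are illustrative, not the PHISICS implementation):

```python
import math

def adaptive_dt(dt, rel_change, target=1e-3, dt_min=1e-4, dt_max=1.0):
    """Toy controller: grow the step while the per-step relative change in the
    monitored quantity (e.g. fission power) stays well below `target`,
    shrink it when the change overshoots."""
    if rel_change > 2.0 * target:
        dt *= 0.5
    elif rel_change < 0.5 * target:
        dt *= 1.5
    return min(max(dt, dt_min), dt_max)

# Demo on a decaying power transient P' = -0.1 P, explicit Euler
P, t, dt, steps = 1.0, 0.0, 1e-3, 0
while t < 50.0:
    P_new = P + dt * (-0.1 * P)
    t += dt
    dt = adaptive_dt(dt, abs(P_new - P) / max(abs(P), 1e-30))
    P, steps = P_new, steps + 1
print(f"P(50) = {P:.4f} in {steps} steps (exact {math.exp(-5.0):.4f})")
```

    The step settles where the per-step change matches the tolerance band, so quiescent phases of a transient are crossed in few, large steps, which is the source of the real-time speed-ups reported above.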

  18. Efficient time-stepping-free time integration of the Maxwell equations

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Brushlinskii, K.V.; Gavreeva, M.S.; Zhukov, V.T.; Severin, A.V.; Cmykhova, N.A.

    Solution of the time dependent Maxwell equations is an important problem arising in many applications ranging from nanophotonics to geoscience and astronomy. The problem is far from trivial, and solutions typically exhibit complicated wave properties as well as damping behavior. Usually, special

  19. Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network

    International Nuclear Information System (INIS)

    Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei

    2008-01-01

    This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step-prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural networks by a co-evolutionary strategy. The search space is separated into two subspaces and the individuals are trained in a parallel computational procedure. The method can dynamically combine the embedding method with the capability of recurrent neural networks to incorporate past experience due to internal recurrence. The effectiveness of CERNN is evaluated by using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step-prediction of chaotic time series.

  20. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
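
    A toy version of the two-step estimator (Python; the block size, thresholds, and the simple first-exceedance detector are stand-ins for the paper's energy detector and hypothesis test):

```python
import numpy as np

rng = np.random.default_rng(7)
n, block, true_toa = 4000, 128, 1234
sig = 0.05 * rng.normal(size=n)                       # noise floor
sig[true_toa:true_toa + 600] += rng.normal(size=600) * np.exp(-np.arange(600) / 200.0)

# Step 1 (coarse): first block whose energy rises clearly above the noise level
energy = np.add.reduceat(sig ** 2, np.arange(0, n, block))
coarse = int(np.argmax(energy > 3.0 * np.median(energy))) * block

# Step 2 (fine): within the coarse region, the first sample exceeding a
# noise-calibrated amplitude threshold approximates the first-path arrival
noise_std = sig[:coarse].std() if coarse > 0 else sig.std()
window = sig[coarse:coarse + 2 * block]
fine = coarse + int(np.argmax(np.abs(window) > 5.0 * noise_std))
print(f"true: {true_toa}, coarse: {coarse}, fine estimate: {fine}")
```

    The coarse stage needs only low-rate block energies, so the expensive sample-level search is confined to a short window, which is how the estimator stays accurate under complexity constraints.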

  1. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

    The present investigation evaluates the influence of three different step chute geometry when skimming flow was allowed over them with the aim of determining the aerated flow length which is a significant factor when developing empirical equations for estimating aeration efficiency of flow. Overall, forty experiments were ...

  2. Adaptive discrete-time controller design with neural network for hypersonic flight vehicle via back-stepping

    Science.gov (United States)

    Xu, Bin; Sun, Fuchun; Yang, Chenguang; Gao, Daoxiang; Ren, Jianxin

    2011-09-01

    In this article, the adaptive neural controller in discrete time is investigated for the longitudinal dynamics of a generic hypersonic flight vehicle. The dynamics are decomposed into the altitude subsystem and the velocity subsystem. The altitude subsystem is transformed into the strict-feedback form from which the discrete-time model is derived by the first-order Taylor expansion. The virtual control is designed with nominal feedback and neural network (NN) approximation via back-stepping. Meanwhile, one adaptive NN controller is designed for the velocity subsystem. To avoid the circular construction problem in the practical control, the design of coefficients adopts the upper bound instead of the nominal value. Under the proposed controller, the semiglobal uniform ultimate boundedness stability is guaranteed. The square and step responses are presented in the simulation studies to show the effectiveness of the proposed control approach.

  3. Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

    Energy Technology Data Exchange (ETDEWEB)

    Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)

    2012-09-15

    ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins according to various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) performed at 70 kVp and 7 mA along with a 12-step aluminum stepwedge (1 mm incremental steps) using a PSP digital sensor. Thereafter, the radiopacities measured with the Digora for Windows 2.5 software were converted to absorbencies, i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray value. Furthermore, a linear regression model of aluminum thickness against absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with various setups (p<0.001) and between the materials (p<0.001). The best predicted model was obtained for the 30 cm, 0.2 seconds setup (R²=0.999). Data from the reduced modified stepwedge were remarkable and comparable with the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.

  4. Real-time modeling of heat distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
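
    Read literally, the simplest instance of the claimed method is a height-weighted linear interpolation between the two measured distributions (a sketch under that assumption; the patented system is presumably more elaborate):

```python
import numpy as np

def room_temperature_model(T_ceiling, T_floor, heights, room_height=3.0):
    """Linearly interpolate between floor and ceiling temperature maps
    (2-D arrays over the room's x-y grid) to get a 3-D temperature field."""
    w = np.asarray(heights) / room_height     # 0 at the floor, 1 at the ceiling
    return (T_floor[None, :, :] * (1 - w)[:, None, None]
            + T_ceiling[None, :, :] * w[:, None, None])

T_ceil = 26.0 + np.random.default_rng(0).random((4, 5))   # synthetic sensor maps
T_floor = 20.0 + np.random.default_rng(1).random((4, 5))
field = room_temperature_model(T_ceil, T_floor, heights=[0.0, 1.0, 2.0, 3.0])
print(field.shape)   # (4 heights, 4, 5)
```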

  5. Narrowing the Expertise Gap for Predicting Intracranial Aneurysm Hemodynamics: Impact of Solver Numerics versus Mesh and Time-Step Resolution.

    Science.gov (United States)

    Khan, M O; Valen-Sendstad, K; Steinman, D A

    2015-07-01

    Recent high-resolution computational fluid dynamics studies have uncovered the presence of laminar flow instabilities and possible transitional or turbulent flow in some intracranial aneurysms. The purpose of this study was to elucidate requirements for computational fluid dynamics to detect these complex flows, and, in particular, to discriminate the impact of solver numerics versus mesh and time-step resolution. We focused on 3 MCA aneurysms, exemplifying highly unstable, mildly unstable, or stable flow phenotypes, respectively. For each, the number of mesh elements was varied by 320× and the number of time-steps by 25×. Computational fluid dynamics simulations were performed by using an optimized second-order, minimally dissipative solver, and a more typical first-order, stabilized solver. With the optimized solver and settings, qualitative differences in flow and wall shear stress patterns were negligible for models down to ∼800,000 tetrahedra and ∼5000 time-steps per cardiac cycle and could be solved within clinically acceptable timeframes. At the same model resolutions, however, the stabilized solver had poorer accuracy and completely suppressed flow instabilities for the 2 unstable flow cases. These findings were verified by using the popular commercial computational fluid dynamics solver, Fluent. Solver numerics must be considered at least as important as mesh and time-step resolution in determining the quality of aneurysm computational fluid dynamics simulations. Proper computational fluid dynamics verification studies, and not just superficial grid refinements, are therefore required to avoid overlooking potentially clinically and biologically relevant flow features. © 2015 by American Journal of Neuroradiology.

  6. A chaos detectable and time step-size adaptive numerical scheme for nonlinear dynamical systems

    Science.gov (United States)

    Chen, Yung-Wei; Liu, Chein-Shan; Chang, Jiang-Ren

    2007-02-01

    The first step in investigating the dynamics of a continuous-time system described by ordinary differential equations is to integrate them to obtain trajectories. In this paper, we convert the group-preserving scheme (GPS) developed by Liu [International Journal of Non-Linear Mechanics 36 (2001) 1047-1068] to a time step-size adaptive scheme, $x_{k+1} = x_k + h\,f(x_k, t_k)$, where $x \in \mathbb{R}^n$ denotes the system variables we are concerned with and $f(x, t) \in \mathbb{R}^n$ is a time-varying vector field. The scheme has a form similar to the Euler scheme, $x_{k+1} = x_k + \Delta t\,f(x_k, t_k)$, but our step size $h$ is adapted automatically. Very interestingly, the ratio $h/\Delta t$, which we call the adaptive factor, can forecast the appearance of chaos if the considered dynamical system becomes chaotic. The numerical examples of the Duffing equation, the Lorenz equation and the Rössler equation, which may exhibit chaotic behaviors under certain parameter values, are used to demonstrate these phenomena. Two other non-chaotic examples are included to compare the performance of the GPS and the adaptive scheme.

  7. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
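
    The MC backbone of such a scheme can be sketched on a 1-D toy problem (Python; this is generic surrogate-guided hybrid MC, not the DHMTS recursion itself): propose with leapfrog under a cheap potential, which is reversible and volume-preserving, then Metropolis-accept with the expensive Hamiltonian, so the chain samples the reference Boltzmann distribution exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
U_exp = lambda x: 0.5 * x ** 2 + 0.1 * x ** 4   # "expensive" reference potential
F_cheap = lambda x: -x                          # force of the cheap surrogate

def hybrid_step(x, dt=0.2, n_inner=5):
    """One hybrid MD-MC move with momentum resampling (mass = 1)."""
    p = rng.normal() / np.sqrt(beta)
    H0 = U_exp(x) + 0.5 * p ** 2
    xn, pn = x, p
    for _ in range(n_inner):                    # leapfrog under the cheap force
        pn += 0.5 * dt * F_cheap(xn)
        xn += dt * pn
        pn += 0.5 * dt * F_cheap(xn)
    H1 = U_exp(xn) + 0.5 * pn ** 2
    # Metropolis test with the *expensive* Hamiltonian removes the bias
    return xn if rng.random() < np.exp(-beta * (H1 - H0)) else x

x, samples = 0.0, []
for _ in range(20000):
    x = hybrid_step(x)
    samples.append(x)
print(np.var(samples))   # below 1.0 because of the quartic stiffening
```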

  8. DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL

    International Nuclear Information System (INIS)

    WIENKE, H.; CAPOTE, R.; HERMAN, M.; SIN, M.

    2007-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232 Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, ''deformed'' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the ''spherical'' MSD calculations and JEFF-3.1 and JENDL-3.3 evaluations

  9. Deformation dependent TUL multi-step direct model

    International Nuclear Information System (INIS)

    Wienke, H.; Capote, R.; Herman, M.; Sin, M.

    2008-01-01

    The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232 Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and JEFF-3.1 and JENDL-3.3 evaluations. (authors)

  10. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Science.gov (United States)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  11. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    Full Text Available It is expected that improvement of transport networks could give rise to the change of spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL model was firstly applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, the spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called SPA-MNL model. Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called 4-STEP model. As a result, an integrated travel demand model is newly developed and named as the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the results of the simulations to the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.

  12. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python

    OpenAIRE

    Wils, Stefan; Schutter, Erik De

    2009-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and b...

  13. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite-difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second order time finite-difference scheme that is frequently used in more conventional finite-difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite-difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
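
    The argument can be stated compactly (operator notation assumed): for $u_{tt} = Lu$ the exact two-level advance is

    $$ u(t+\Delta t) + u(t-\Delta t) = 2\cos\!\left(\Delta t\sqrt{-L}\right)u(t), $$

    and truncating the cosine after its first two terms, $\cos(\Delta t\sqrt{-L}) \approx 1 + \tfrac{\Delta t^{2}}{2}L$, recovers the conventional second-order scheme $u(t+\Delta t) = 2u(t) - u(t-\Delta t) + \Delta t^{2}Lu(t)$, while the REM keeps further Chebyshev terms of the same cosine and therefore remains accurate for much larger time steps.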

  14. Rubbing time and bonding performance of one-step adhesives to primary enamel and dentin

    Science.gov (United States)

    Botelho, Maria Paula Jacobucci; Isolan, Cristina Pereira; Schwantz, Júlia Kaster; Lopes, Murilo Baena; de Moraes, Rafael Ratto

    2017-01-01

    Abstract Objectives: This study investigated whether increasing the concentration of acidic monomers in one-step adhesives would allow reducing their application time without interfering with the bonding ability to primary enamel and dentin. Material and methods: Experimental one-step self-etch adhesives were formulated with 5 wt% (AD5), 20 wt% (AD20), or 35 wt% (AD35) acidic monomer. The adhesives were applied using rubbing motion for 5, 10, or 20 s. Bond strengths to primary enamel and dentin were tested under shear stress. A commercial etch-and-rinse adhesive (Single Bond 2; 3M ESPE) served as reference. Scanning electron microscopy was used to observe the morphology of bonded interfaces. Data were analysed at p<0.05. Results: In enamel, AD35 had higher bond strength when rubbed for at least 10 s, while application for 5 s generated lower bond strength. In dentin, increased acidic monomer improved bonding only for the 20 s rubbing time. The etch-and-rinse adhesive yielded higher bond strength to enamel and similar bonding to dentin as compared with the self-etch adhesives. The adhesive layer was thicker and more irregular for the etch-and-rinse material, with no appreciable differences among the self-etch systems. Conclusion: Overall, increasing the acidic monomer concentration only led to an increase in bond strength to enamel when the rubbing time was at least 10 s. In dentin, despite the increase in bond strength with longer rubbing times, the results favoured the experimental adhesives compared to the conventional adhesive. Reduced rubbing time of self-etch adhesives should be avoided in the clinical setup. PMID:29069150

  15. A one-step, real-time PCR assay for rapid detection of rhinovirus.

    Science.gov (United States)

    Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M

    2010-01-01

    One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus by using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for the PCR assay using 38 of 40 rhinovirus reference strains representing different serotypes; the assay could reproducibly detect rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID(50) (50% tissue culture infectious dose endpoint) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.

  16. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.

  17. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus ¹²C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
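
    The core of the ITS method, and the origin of the Dirac-sea problem, can be stated compactly (generic notation, not the authors'): evolving in imaginary time damps each eigencomponent at a rate set by its eigenvalue,

    $$ \psi(\tau) = e^{-\tau\hat h}\,\psi(0) = \sum_i c_i\,e^{-\tau\varepsilon_i}\phi_i \;\longrightarrow\; c_0\,e^{-\tau\varepsilon_0}\phi_0 \quad (\tau\to\infty), $$

    with renormalization after each step. Because the Dirac spectrum is unbounded from below, direct evolution collapses onto the negative-energy continuum, which is why the evolution is instead performed for the Schroedinger-like equation of the upper component.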

  18. Solving the dirac equation with nonlocal potential by imaginary time step method

    International Nuclear Information System (INIS)

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-01-01

    The imaginary time step (ITS) method is applied to solve the Dirac equation with a nonlocal potential in coordinate space by the ITS evolution of the corresponding Schroedinger-like equation for the upper component. It is demonstrated that the ITS evolution can be equivalently performed for the Schroedinger-like equation with or without localization. The latter algorithm is recommended for applications for reasons of simplicity and efficiency. The feasibility and reliability of this algorithm are also illustrated by taking the nucleus ¹⁶O as an example, where the same results as the shooting method for the Dirac equation with localized effective potentials are obtained. (authors)

  19. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  20. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  1. Rubbing time and bonding performance of one-step adhesives to primary enamel and dentin

    Directory of Open Access Journals (Sweden)

    Maria Paula Jacobucci Botelho

    Full Text Available Abstract Objectives: This study investigated whether increasing the concentration of acidic monomers in one-step adhesives would allow reducing their application time without interfering with the bonding ability to primary enamel and dentin. Material and methods: Experimental one-step self-etch adhesives were formulated with 5 wt% (AD5), 20 wt% (AD20), or 35 wt% (AD35) acidic monomer. The adhesives were applied using rubbing motion for 5, 10, or 20 s. Bond strengths to primary enamel and dentin were tested under shear stress. A commercial etch-and-rinse adhesive (Single Bond 2; 3M ESPE) served as reference. Scanning electron microscopy was used to observe the morphology of bonded interfaces. Data were analysed at p<0.05. Results: In enamel, AD35 had higher bond strength when rubbed for at least 10 s, while application for 5 s generated lower bond strength. In dentin, increased acidic monomer improved bonding only for 20 s rubbing time. The etch-and-rinse adhesive yielded higher bond strength to enamel and similar bonding to dentin as compared with the self-etch adhesives. The adhesive layer was thicker and more irregular for the etch-and-rinse material, with no appreciable differences among the self-etch systems. Conclusion: Overall, increasing the acidic monomer concentration only led to an increase in bond strength to enamel when the rubbing time was at least 10 s. In dentin, despite the increase in bond strength with longer rubbing times, the results favoured the experimental adhesives compared to the conventional adhesive. Reduced rubbing time of self-etch adhesives should be avoided in the clinical setup.

  2. Real-time holographic interferometry using photorefractive sillenite crystals with phase-stepping technique

    Science.gov (United States)

    Gesualdi, M. R. R.; Soga, D.; Muramatsu, M.

    2006-01-01

    This work presents a holographic interferometer that uses photorefractive sillenite crystals in the diffusive regime, in a configuration that exhibits diffraction anisotropy, for real-time holographic interferometry. The writing-reading process of the holographic interferogram was done in real time, coupled to an interferogram-analysis method that uses the phase-stepping technique for quantitative measurement of changes on an object. The holographic interferograms of the analyzed surface were captured and used to calculate the phase map with the four-frame technique. The unwrapping process used was the cellular-automata technique. We obtained quantitative results for several applications: measurements of micro-rotation of surfaces, punctual micro-displacements on an aluminum plate, stress on a dog's jaw, among others; suggesting promising new application possibilities for basic research, dentistry and technological areas.
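The four-frame phase extraction the abstract refers to is a standard formula; a minimal sketch on synthetic interferograms follows. The fringe pattern, and the use of numpy's simple row-wise unwrap in place of the paper's cellular-automata unwrapping, are assumptions for illustration.

```python
import numpy as np

# Four-frame phase-stepping: interferograms I_k = A + B*cos(phi + k*pi/2),
# k = 0..3. Synthetic data stand in for the captured holographic frames.
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
phi_true = 2 * np.pi * (xx / nx) * 3          # assumed test phase: tilt fringes
A, B = 1.0, 0.5
I1, I2, I3, I4 = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Standard four-step formula: tan(phi) = (I4 - I2) / (I1 - I3)
phi_wrapped = np.arctan2(I4 - I2, I1 - I3)    # wrapped to (-pi, pi]

# The paper unwraps with a cellular-automata technique; numpy's row-wise
# 1-D unwrap is used here purely as a simple stand-in.
phi = np.unwrap(phi_wrapped, axis=1)
print(np.abs(phi - phi_true).max())           # ~0 up to numerical precision
```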

  3. Time-dependent intranuclear cascade model

    International Nuclear Information System (INIS)

    Barashenkov, V.S.; Kostenko, B.F.; Zadorogny, A.M.

    1980-01-01

    An intranuclear cascade model with explicit consideration of the time coordinate in the Monte Carlo simulation of the development of a cascade particle shower has been considered. Calculations have been performed using a diffuse nuclear boundary without any step approximation of the density distribution. Changes in the properties of the target nucleus during the cascade development have been taken into account. The results of these calculations have been compared with experiment and with the data which had been obtained by means of a time-independent cascade model. The consideration of time improved agreement between experiment and theory particularly for high-energy shower particles; however, for low-energy cascade particles (with grey and black tracks in photoemulsion) a discrepancy remains at T >= 10 GeV. (orig.)

  4. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate how much these mainstream signal decomposing algorithms improve the Extreme Learning Machines in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective for accurate wind speed prediction.
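A minimal sketch of the Extreme Learning Machine regression at the core of these hybrid models: a fixed random hidden layer plus a closed-form least-squares readout. The synthetic wind series, lag count, and horizon are assumptions, and the signal-decomposition stage of the paper's hybrids is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal Extreme Learning Machine: random hidden layer, least-squares
# readout. A synthetic series stands in for measured wind speed; the
# decomposition stage (wavelet/EMD) of the hybrid models is omitted.
t = np.arange(2000)
speed = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.5, t.size)

lags, horizon = 12, 3                         # 3-step-ahead forecast
X = np.stack([speed[i:i + lags] for i in range(t.size - lags - horizon)])
y = speed[lags + horizon - 1:-1]              # value `horizon` steps ahead

n_hidden = 100
W = rng.normal(size=(lags, n_hidden))         # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights in closed form

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

In the paper's hybrids, each decomposed sub-series would be forecast by one such ELM and the sub-forecasts summed.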

  5. An adaptive time step scheme for a system of stochastic differential equations with multiple multiplicative noise: chemical Langevin equation, a proof of concept.

    Science.gov (United States)

    Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2008-01-07

    Models involving stochastic differential equations (SDEs) play a prominent role in a wide range of applications where systems are not at the thermodynamic limit, for example, biological population dynamics. Therefore there is a need for numerical schemes that are capable of accurately and efficiently integrating systems of SDEs. In this work we introduce a variable step-size algorithm and apply it to systems of stiff SDEs with multiple multiplicative noise. The algorithm is validated using a subclass of SDEs called chemical Langevin equations that appear in the description of dilute chemical kinetics models, with important applications mainly in biology. Three representative examples are used to test and report on the behavior of the proposed scheme. We demonstrate the advantages and disadvantages of the proposed method over fixed time step integration schemes, showing that the adaptive time step method is considerably more stable than fixed step methods with no excessive additional computational overhead.
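A generic step-doubling adaptive Euler–Maruyama scheme conveys the idea of error-controlled step selection for SDEs. This is a sketch under assumed drift, diffusion, and tolerance values, not the paper's specific algorithm for chemical Langevin equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic adaptive Euler-Maruyama via step doubling for dX = a(X)dt + b(X)dW.
# Illustrative only -- not the paper's CLE scheme. Geometric Brownian motion
# serves as a stand-in multiplicative-noise test problem.
a = lambda x: 0.5 * x          # drift
b = lambda x: 0.8 * x          # multiplicative diffusion

def adaptive_em(x0, t_end, dt0=1e-2, tol=1e-3):
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        dW1 = rng.normal(0, np.sqrt(dt / 2))
        dW2 = rng.normal(0, np.sqrt(dt / 2))
        # one full step vs. two half steps with the same Brownian increments
        x_full = x + a(x) * dt + b(x) * (dW1 + dW2)
        x_half = x + a(x) * dt / 2 + b(x) * dW1
        x_half = x_half + a(x_half) * dt / 2 + b(x_half) * dW2
        err = abs(x_full - x_half)
        if err <= tol * max(1.0, abs(x)):
            t, x = t + dt, x_half      # accept the finer solution
            dt *= 1.2                  # grow the step cautiously
        else:
            dt *= 0.5                  # reject and retry with a smaller step
    return x

print(adaptive_em(1.0, 1.0))
```

Reusing the same Brownian increments for the coarse and fine solutions is what makes the error estimate meaningful for a stochastic path.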

  6. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location.

    Science.gov (United States)

    Bancroft, Matthew J; Day, Brian L

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body's momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait.

  7. TMS field modelling-status and next steps

    DEFF Research Database (Denmark)

    Thielscher, Axel

    2013-01-01

    In recent years, an increasing number of studies have used geometrically accurate head models and finite element (FEM) or finite difference methods (FDM) to estimate the electric field induced by non-invasive neurostimulation techniques such as transcranial magnetic stimulation (TMS) or transcranial … necessary. Focusing on motor cortex stimulation by TMS, our goal is to explore to which extent the field estimates based on advanced models correlate with the physiological stimulation effects. For example, we aim at testing whether interindividual differences in the field estimates are also reflected … in differences in the MEP responses. This would indicate that the field calculations accurately capture the impact of individual macroanatomical features of the head and brain on the induced field distribution, in turn strongly supporting their plausibility. Our approach is based on the SimNIBS software pipeline …

  8. Step training in a rat model for complex aneurysmal vascular microsurgery

    Directory of Open Access Journals (Sweden)

    Martin Dan

    2015-12-01

    Introduction: Microsurgery training is a key step for young neurosurgeons. In both vascular and peripheral nerve pathology, microsurgical techniques are useful tools for proper treatment. Many training models have been described, including ex vivo (chicken wings) and in vivo (rat, rabbit) ones. Complex microsurgery training includes termino-terminal vessel anastomosis and nerve repair. The aim of this study was to describe a reproducible complex microsurgery training model in rats. Materials and methods: The experimental animals were Brown Norway male rats between 10-16 weeks (average 13) and weighing between 250-400 g (average 320 g). We performed n=10 rat hind limb replantations. The surgical steps and preoperative management are carefully described. We evaluated vascular patency by clinical assessment: color, temperature, capillary refill. The rats were inspected daily for any signs of infection. Nerve regeneration was assessed by the footprint method. Results: There were no cases of vascular compromise or autophagia. All rats had long-term survival (>90 days). Nerve regeneration was clinically complete at 6 months postoperatively. The mean operative time was 183 minutes, and ischemia time was 25 minutes.

  9. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    This phenomenon is known as attentional dwell time (e.g. Duncan, Ward, Shapiro, 1994). All previous studies of the attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from individual subjects. … Our model of attentional dwell time extends these mechanisms by proposing that the processing resources (cells) already engaged in a feedback loop (i.e. allocated to an object) are locked in VSTM and therefore cannot be allocated to other objects in the visual field before the encoded object has been released. This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects …

  10. Improving stability of stabilized and multiscale formulations in flow simulations at small time steps

    KAUST Repository

    Hsu, Ming-Chen

    2010-02-01

    The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395. © 2009 Elsevier B.V. All rights reserved.

  11. Avoid the tsunami of the Dirac sea in the imaginary time step method

    International Nuclear Information System (INIS)

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those of the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels doubts about the ITS method in relativistic systems. (author)

  12. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Applies a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated format and is error-prone when describing geometric models. For this reason, a conversion algorithm that converts a general geometric model to an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This research is promising, and serves as a valuable reference for researchers involved with MCNP-related research.

  13. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews and provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  14. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  15. Detection of Tomato black ring virus by real-time one-step RT-PCR.

    Science.gov (United States)

    Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G

    2011-01-01

    A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA, indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Modeling printed circuit board curvature in relation to manufacturing process steps

    NARCIS (Netherlands)

    Schuerink, G.A.; Slomp, M.; Wits, Wessel Willems; Legtenberg, R.; Legtenberg, R.; Kappel, E.A.

    2013-01-01

    This paper presents an analytical method to predict deformations of Printed Circuit Boards (PCBs) in relation to their manufacturing process steps. Classical Lamination Theory (CLT) is used as a basis. The model tracks internal stresses and includes the results of subsequent production steps, such as …

  17. Numerical Investigation of Transitional Flow over a Backward Facing Step Using a Low Reynolds Number k-ε Model

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter V.

    In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar-to-turbulent transitional flow over a backward facing step.

  18. Continuous-Time Random Walk with multi-step memory: an application to market dynamics

    Science.gov (United States)

    Gubiec, Tomasz; Kutner, Ryszard

    2017-11-01

    An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market within the high-frequency time scale. Then, it was justified theoretically by considering the bid-ask bounce mechanism containing some delay characteristic of any double-auction market. Our model proved to be exactly analytically solvable. Therefore, it enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
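The bid-ask-bounce memory can be illustrated with a minimal one-step-memory CTRW: i.i.d. exponential waiting times and jump signs anti-correlated with the previous jump. All parameters are assumptions for illustration; the paper's model covers an arbitrary memory depth.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal CTRW with one-step jump memory: waiting times are i.i.d.
# exponential, while each jump's sign is anti-correlated with the previous
# jump (a crude bid-ask-bounce effect). Parameters are illustrative.
n_jumps, eps = 20_000, 0.7      # eps: probability of reversing the last jump
tau = rng.exponential(1.0, n_jumps)          # i.i.d. waiting times
jumps = np.empty(n_jumps)
jumps[0] = rng.choice([-1.0, 1.0])
for i in range(1, n_jumps):
    reverse = rng.random() < eps
    jumps[i] = -jumps[i - 1] if reverse else jumps[i - 1]

t = np.cumsum(tau)              # event times
x = np.cumsum(jumps)            # walker (log-price) trajectory

# One-lag jump autocorrelation: should approach 1 - 2*eps (negative here)
c1 = np.mean(jumps[1:] * jumps[:-1])
print(c1, 1 - 2 * eps)
```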

  19. School Experience Comes Alive in an Integrated B.Ed. Programme. (A Step by Step Descriptive Model).

    Science.gov (United States)

    Kelliher, Marie H.; Balint, Margaret S.

    A model is presented of a preservice teacher education program (used at Polding College, Sydney, Australia) that consists of three years' academic preparation, and at least one year of classroom teaching experience, followed by one year of full-time study or its equivalent. The school experience is the core of this program and it is integrated…

  20. A Semi-Empirical Two Step Carbon Corrosion Reaction Model in PEM Fuel Cells

    Energy Technology Data Exchange (ETDEWEB)

    Young, Alan; Colbow, Vesna; Harvey, David; Rogers, Erin; Wessel, Silvia

    2013-01-01

    The cathode CL of a polymer electrolyte membrane fuel cell (PEMFC) was exposed to high potentials, 1.0 to 1.4 V versus a reversible hydrogen electrode (RHE), that are typically encountered during start up/shut down operation. While both platinum dissolution and carbon corrosion occurred, the carbon corrosion effects were isolated and modeled. The presented model separates the carbon corrosion process into two reaction steps: (1) oxidation of the carbon surface to carbon-oxygen groups, and (2) further corrosion of the oxidized surface to carbon dioxide/monoxide. To oxidize and corrode the cathode catalyst carbon support, the CL was subjected to an accelerated stress test that cycled the potential from 0.6 VRHE to an upper potential limit (UPL) ranging from 0.9 to 1.4 VRHE at varying dwell times. The reaction rate constants and specific capacitances of carbon and platinum were fitted by evaluating the double layer capacitance (Cdl) trends. Carbon surface oxidation increased the Cdl due to the increased specific capacitance of carbon surfaces with carbon-oxygen groups, while the second corrosion reaction decreased the Cdl due to loss of the overall carbon surface area. The first oxidation step differed between carbon types, while both reaction rate constants were found to have a dependency on UPL, temperature, and gas relative humidity.
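The two-step picture lends itself to a toy rate-equation sketch: oxidation builds surface coverage (raising specific capacitance) while corrosion consumes oxidized area (lowering it). The rate constants and capacitances below are invented for illustration, not the paper's fitted values.

```python
import numpy as np

# Toy realization of the two-step picture: (1) clean carbon oxidizes to
# carbon-oxygen groups (coverage theta), raising specific capacitance;
# (2) oxidized sites corrode away, shrinking total area S. All constants
# are illustrative, not the paper's fitted values.
k1, k2 = 0.05, 0.01        # per-cycle oxidation / corrosion rate constants
c_clean, c_ox = 1.0, 2.5   # relative specific capacitances of the surfaces
theta, S = 0.0, 1.0        # initial coverage and normalized surface area
dt = 1.0                   # one stress-test cycle per step

for cycle in range(500):
    dtheta = k1 * (1 - theta) - k2 * theta   # step 1 creates, step 2 consumes
    dS = -k2 * theta * S                     # corrosion removes oxidized area
    theta += dt * dtheta
    S += dt * dS
    if cycle % 100 == 0:
        Cdl = S * ((1 - theta) * c_clean + theta * c_ox)
        print(f"cycle {cycle:4d}  theta={theta:.3f}  S={S:.3f}  Cdl={Cdl:.3f}")
```

With these toy values Cdl first rises as coverage grows, then falls as the total area is consumed, reproducing the qualitative trend the abstract describes.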

  1. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.

    Science.gov (United States)

    Rogers, Mark W; Mille, Marie-Laure

    2016-08-15

    Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  2. Two-dimensional modeling of stepped planing hulls with open and pressurized air cavities

    Directory of Open Access Journals (Sweden)

    Konstantin I. Matveev

    2012-06-01

    A method of hydrodynamic discrete sources is applied for two-dimensional modeling of stepped planing surfaces. The water surface deformations, wetted hull lengths, and pressure distribution are calculated at given hull attitude and Froude number. Pressurized air cavities that improve hydrodynamic performance can also be modeled with the current method. Presented results include validation examples, parametric calculations of a single-step hull, effect of trim tabs, and performance of an infinite series of periodic stepped surfaces. It is shown that transverse steps can lead to higher lift-drag ratio, although at reduced lift capability, in comparison with a stepless hull. Performance of a multi-step configuration is sensitive to the wave pattern between hulls, which depends on Froude number and relative hull spacing.

  3. The indirect link between perceived parenting and adolescent future orientation : A multiple-step model

    NARCIS (Netherlands)

    Seginer, R.; Vermulst, A.A.; Shoyer, S.

    2004-01-01

    The indirect links between perceived mothers' and fathers' autonomous-accepting parenting and future orientation were examined in a mediational model consisting of five steps: perceived mothers' and fathers' autonomous-accepting parenting, self-evaluation, and the motivational, cognitive …

  4. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information in its life-cycle requires standardized data exchange protocols. Standard for Exchange of Product Model Data (STEP) is such a standard that has been used widely by the industries. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in data exchange of specific activities for various businesses. DEXs show us it would be possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  5. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2015-01-01

    This technical paper documents Kennedy Space Center's Independent Assessment team's work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer (CSO) and GSDO management during key programmatic reviews. The assessments provided the GSDO Program with an analysis of how egress time affects the likelihood of astronaut and worker survival during an emergency. For each assessment, the team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building (VAB). Based on the composite survivability versus time graphs from the first two assessments, there was a soft knee in the Figure of Merit graphs at eight minutes (ten minutes after egress was ordered). Thus, the graphs illustrated to the decision makers that the final emergency egress design selected should have the capability of transporting the flight crew from the top of LC-39B to a safe location in eight minutes or less. Results for the third assessment were dominated by hazards that were classified as instantaneous in nature (e.g. stacking mishaps) and therefore had no effect on survivability vs time to egress the VAB. VAB emergency scenarios that degraded over time (e.g. fire) produced survivability vs time graphs that were in line with aerospace industry norms.

  6. A Step Forward to Closing the Loop between Static and Dynamic Reservoir Modeling

    Directory of Open Access Journals (Sweden)

    Cancelliere M.

    2014-12-01

    The current trend for history matching is to find multiple calibrated models instead of a single set of model parameters that match the historical data. The advantage of several current workflows involving assisted history matching techniques, particularly those based on heuristic optimizers or direct search, is that they lead to a number of calibrated models that partially address the problem of the non-uniqueness of the solutions. The importance of achieving multiple solutions is that calibrated models can be used for a true quantification of the uncertainty affecting the production forecasts, which represent the basis for technical and economic risk analysis. In this paper, the importance of incorporating the geological uncertainties in a reservoir study is demonstrated. A workflow, which includes the analysis of the uncertainty associated with the facies distribution for a fluvial depositional environment in the calibration of the numerical dynamic models and, consequently, in the production forecast, is presented. The first step in the workflow was to generate a set of facies realizations starting from different conceptual models. After facies modeling, the petrophysical properties were assigned to the simulation domains. Then, each facies realization was calibrated separately by varying permeability and porosity fields. Data assimilation techniques were used to calibrate the models in a reasonable span of time. Results showed that even the adoption of a conceptual model for facies distribution clearly representative of the reservoir internal geometry might not guarantee reliable results in terms of production forecast. Furthermore, results also showed that realizations which seem fully acceptable after calibration were not representative of the true reservoir internal configuration and provided wrong production forecasts; conversely, realizations which did not show a good fit of the production data could reliably predict the reservoir …

  7. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates …
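STEPS implements a composition-and-rejection variant of the Gillespie SSA; for orientation, here is the basic direct-method SSA on a well-mixed reversible reaction (no spatial mesh), with assumed rate constants.

```python
import numpy as np

rng = np.random.default_rng(3)

# Basic Gillespie direct-method SSA for a well-mixed reversible reaction
# A <-> B. STEPS uses the composition-rejection variant and adds diffusion
# between tetrahedra; this sketch shows only the core stochastic kinetics.
k_f, k_b = 1.0, 0.5
A, B, t, t_end = 100, 0, 0.0, 10.0

while t < t_end:
    rates = np.array([k_f * A, k_b * B])     # propensities of A->B, B->A
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)        # time to the next reaction
    if rng.random() < rates[0] / total:      # pick which reaction fires
        A, B = A - 1, B + 1
    else:
        A, B = A + 1, B - 1

print(A, B)   # fluctuates near the equilibrium ratio B/A = k_f/k_b = 2
```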

  8. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  9. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  10. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was …

  11. A Two-Step RKC Method for Time-Dependent PDEs

    International Nuclear Information System (INIS)

    Sommeijer, Ben; Verwer, Jan

    2008-01-01

    An integration method is discussed which has been designed to treat parabolic and hyperbolic terms explicitly and stiff reaction terms implicitly. The method is a special two-step form of the one-step IMEX (IMplicit-EXplicit) RKC (Runge-Kutta-Chebyshev) method. The special two-step form is introduced with the aim of getting a non-zero imaginary stability boundary which is zero for the one-step method. Having a non-zero imaginary stability boundary allows, for example, the integration of pure advection equations space-discretized with centered schemes, the integration of damped or viscous wave equations, the integration of coupled sound and heat flow equations, etc. For our class of methods it also simplifies the choice of temporal step sizes satisfying the von Neumann stability criterion, by embedding a thin long rectangle inside the stability region.
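The two-step RKC scheme itself is intricate, but the IMEX splitting it builds on can be sketched with a first-order IMEX Euler step on a 1-D reaction-diffusion problem: diffusion advanced explicitly, a stiff linear reaction treated implicitly (in closed form here). The grid, constants, and the linear reaction are assumptions for illustration, not the paper's method.

```python
import numpy as np

# First-order IMEX Euler for u_t = D u_xx + r(u) with r(u) = -k (u - g):
# diffusion explicit, stiff linear reaction implicit (closed form).
# Illustrates only the implicit-explicit splitting idea behind the RKC
# scheme, not the two-step Runge-Kutta-Chebyshev method itself.
n, D, k, g = 200, 1.0, 1e6, 0.3     # large k makes the reaction very stiff
dx = 1.0 / n
dt = 0.4 * dx**2 / D                # restricted by explicit diffusion only
u = np.where(np.linspace(0, 1, n) < 0.5, 1.0, 0.0)

for _ in range(2000):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u_star = u + dt * D * lap                  # explicit diffusion stage
    u = (u_star + dt * k * g) / (1 + dt * k)   # implicit reaction stage

# dt*k = 10 here; a fully explicit scheme would need dt < 2/k, 5x smaller
print(u.min(), u.max())             # driven to the reaction equilibrium g
```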

  12. STEPS: modeling and simulating complex reaction-diffusion systems with Python

    Directory of Open Access Journals (Sweden)

    Stefan Wils

    2009-06-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code.

  13. Finite element method for incompressible two-fluid model using a fractional step method

    International Nuclear Information System (INIS)

    Uchiyama, Tomomi

    1997-01-01

    This paper presents a finite element method for an incompressible two-fluid model. The solution algorithm is based on the fractional step method, which is frequently used in the finite element calculation for single-phase flows. The calculating domain is divided into quadrilateral elements with four nodes. The Galerkin method is applied to derive the finite element equations. Air-water two-phase flows around a square cylinder are calculated by the finite element method. The calculation demonstrates the close relation between the volumetric fraction of the gas-phase and the vortices shed from the cylinder, which is favorably compared with the existing data. It is also confirmed that the present method allows the calculation with less CPU time than the SMAC finite element method proposed in my previous paper. (author)

  14. Bayesian Inference for Step-Stress Partially Accelerated Competing Failure Model under Type II Progressive Censoring

    Directory of Open Access Journals (Sweden)

    Xiaolin Shi

    2016-01-01

    This paper deals with Bayesian inference on step-stress partially accelerated life tests using Type II progressive censored data in the presence of competing failure causes. Suppose that the occurrence time of the failure cause follows a Pareto distribution under use stress levels. Based on the tampered failure rate model, the objective Bayesian estimates, Bayesian estimates, and E-Bayesian estimates of the unknown parameters and acceleration factor are obtained under the squared loss function. To evaluate the performance of the obtained estimates, the average relative errors (AREs) and mean squared errors (MSEs) are calculated. In addition, comparisons of the three estimates of the unknown parameters and acceleration factor for different sample sizes and different progressive censoring schemes are conducted through Monte Carlo simulations.

  15. A coupled electro-thermo-mechanical FEM code for large scale problems including multi-domain and multiple time-step aspects

    OpenAIRE

    Menanteau, Laurent; Pantalé, Olivier; Caperaa, Serge

    2005-01-01

    This work concerns the development of a virtual prototyping tool for large-scale electro-thermo-mechanical simulation of power converters used in railway transport, including multi-domain and multiple time-step aspects. For this purpose, the Domain Decomposition Method (DDM) is used to give, on one hand, the ability to treat large-scale problems and, on the other hand, for transient analysis, the ability to use different time steps in different parts of the numerical model. An object-oriented programming …

  16. Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves

    Science.gov (United States)

    Rošt'áková, Zuzana; Rosipal, Roman

    2018-02-01

    Sleep can be characterised as a dynamic process that has a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take into account its dynamic structure. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. Original and time-aligned sleep probabilistic curves were transformed into a finite-dimensional vector space, and their ability to predict subjects' age or daily measures is evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow wave sleep.
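One common way to synchronise such curves before analysis is dynamic time warping; the sketch below aligns two synthetic probability curves. The dtw_path helper and the Gaussian test curves are assumptions for illustration, not the paper's specific alignment method.

```python
import numpy as np

# Dynamic time warping: one generic way to align two probabilistic curves
# before analysis. Illustrative only; not necessarily the method used in
# the paper.
def dtw_path(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):                  # backtrack the optimal warp
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda ij: D[ij])
    return D[n, m], path[::-1]

t = np.linspace(0, 1, 200)
curve1 = np.exp(-((t - 0.40) / 0.1) ** 2)    # e.g. P(S2), subject 1
curve2 = np.exp(-((t - 0.55) / 0.1) ** 2)    # same feature, shifted in time
dist, path = dtw_path(curve1, curve2)
print(dist)                                  # small despite the time shift
```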

  17. A two-step framework for over-threshold modelling of environmental extremes

    Science.gov (United States)

    Bernardara, P.; Mazas, F.; Kergadallan, X.; Hamm, L.

    2014-03-01

    The evaluation of the probability of occurrence of extreme natural events is important for the protection of urban areas, industrial facilities and others. Traditionally, the extreme value theory (EVT) offers a valid theoretical framework on this topic. In an over-threshold modelling (OTM) approach, Pickands' theorem (Pickands, 1975) states that, for a sample composed of independent and identically distributed (i.i.d.) values, the distribution of the data exceeding a given threshold converges to a generalized Pareto distribution (GPD). Following this theoretical result, the analysis of realizations of environmental variables exceeding a threshold has spread widely in the literature. However, applying this theorem to an auto-correlated time series logically involves two successive and complementary steps: the first is required to build a sample of i.i.d. values from the available information, as required by the EVT; the second to set the threshold for the optimal convergence toward the GPD. In the past, the same threshold was often employed both for sampling observations and for meeting the hypothesis of extreme value convergence. This confusion can lead to an erroneous understanding of the methodologies and tools available in the literature. This paper aims at clarifying the conceptual framework involved in threshold selection, reviewing the available methods for the application of both steps and illustrating it with a double-threshold approach.
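The two successive steps can be sketched on a synthetic auto-correlated series: declustering to build an approximately i.i.d. sample of cluster maxima, then a GPD fit to the excesses over a second, statistical threshold. Both thresholds and the AR(1) test data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)

# Two-step over-threshold modelling on a synthetic auto-correlated series.
x = np.zeros(50_000)
for i in range(1, x.size):                    # AR(1): auto-correlated noise
    x[i] = 0.7 * x[i - 1] + rng.gumbel(0, 1)

# Step 1 (sampling threshold): keep one maximum per cluster of consecutive
# exceedances, yielding roughly independent events.
u_sample = np.quantile(x, 0.95)
peaks, current = [], []
for xi in x:
    if xi > u_sample:
        current.append(xi)
    elif current:
        peaks.append(max(current))            # cluster maximum
        current = []
peaks = np.array(peaks)

# Step 2 (statistical threshold): fit the GPD to excesses over u_stat.
u_stat = np.quantile(peaks, 0.5)
excesses = peaks[peaks > u_stat] - u_stat
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
print(f"clusters={peaks.size}  xi={shape:.3f}  sigma={scale:.3f}")
```

Keeping the two thresholds distinct, as the paper argues, is what allows the declustering choice and the GPD convergence choice to be tuned independently.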

  18. Sparse time series chain graphical models for reconstructing genetic networks

    NARCIS (Netherlands)

    Abegaz, Fentaw; Wit, Ernst

    We propose a sparse high-dimensional time series chain graphical model for reconstructing genetic networks from gene expression data parametrized by a precision matrix and autoregressive coefficient matrix. We consider the time steps as blocks or chains. The proposed approach explores patterns of …

  19. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    Studies of the time course of visual attention have identified a temporary functional blindness to the second of two spatially separated targets: attending to one visual stimulus may lead to impairments in identifying a second stimulus presented between 200 to 500 ms after the first. This phenomenon is known as attentional dwell time (e.g. Duncan, Ward, Shapiro, 1994). All previous studies of the attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from individual subjects. Our new model is based on the Theory of Visual Attention (TVA; Bundesen, 1990). TVA has previously been successful in explaining results from experiments where stimuli are presented simultaneously in the spatial domain (e.g. whole report and partial report) but has not yet been extended …

  20. Stability of the high-order finite elements for acoustic or elastic wave propagation with high-order time stepping

    KAUST Repository

    De Basabe, Jonás D.

    2010-04-01

    We investigate the stability of some high-order finite element methods, namely the spectral element method and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation that have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.

  1. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, slope model, bridge scheme and stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
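The difference between step models can be felt on a one-nuclide toy depletion problem with a prescribed time-varying flux: the staircase model freezes the beginning-of-step flux, while a predictor-corrector-style model averages the end-point rates. All values are illustrative; in a real Monte Carlo burnup code the end-of-step flux would come from a transport solve on the predicted composition.

```python
import numpy as np

# One-nuclide toy depletion dN/dt = -sigma * phi(t) * N under a prescribed
# drifting flux, comparing the staircase step model against a
# predictor-corrector-style average of end-point rates. Illustrative only.
sigma = 1.0
phi = lambda t: 1.0 + 0.8 * t

def deplete(n_steps, corrector, t_end=1.0, n0=1.0):
    dt, t, n = t_end / n_steps, 0.0, n0
    for _ in range(n_steps):
        if corrector:
            # a real MC burnup code recomputes the end-of-step flux from a
            # transport solve on the predicted composition; phi is
            # prescribed here, so the average is available directly
            rate = 0.5 * (phi(t) + phi(t + dt))
        else:
            rate = phi(t)                    # staircase: freeze BOS flux
        n *= np.exp(-sigma * rate * dt)
        t += dt
    return n

exact = np.exp(-sigma * 1.4)                 # closed-form integral of phi
for steps in (2, 4, 8):
    print(steps,
          abs(deplete(steps, False) - exact) / exact,   # O(dt) error
          abs(deplete(steps, True) - exact) / exact)    # O(dt^2) error
```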

  2. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Directory of Open Access Journals (Sweden)

    Conor Lawless

    Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT-overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.
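The SSMRS idea can be mimicked in a few lines: each simulated cell keeps a low ROS level until a random senescence event steps it to a high level, so the population mean rises with age although no individual cell changes gradually. The hazard and ROS levels are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(5)

# SSMRS-style toy: each cell holds a low ROS level until a random
# senescence event steps it to a high level; no cell ever increases
# gradually, yet the population mean rises with age.
n_cells, t_max = 10_000, 60
hazard = 0.05                      # per-day senescence probability (assumed)
ros_low, ros_high = 1.0, 6.0       # arbitrary ROS units

switch_day = rng.geometric(hazard, n_cells)    # day each cell senesces
for day in range(0, t_max + 1, 10):
    senescent = switch_day <= day
    mean_ros = np.where(senescent, ros_high, ros_low).mean()
    print(f"day {day:2d}: senescent={senescent.mean():5.1%}  "
          f"mean ROS={mean_ros:.2f}")
```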

  3. Kinetic Analysis of Parallel-Consecutive First-Order Reactions with a Reversible Step: Concentration-Time Integrals Method

    Science.gov (United States)

    Mucientes, A. E.; de la Pena, M. A.

    2009-01-01

    The concentration-time integrals method has been used to solve kinetic equations of parallel-consecutive first-order reactions with a reversible step. This method involves the determination of the area under the curve for the concentration of a given species against time. Computer techniques are used to integrate experimental curves and the method…
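The core identity behind the method is easiest to see for plain first-order A → B, where integrating d[B]/dt = k[A] gives [B](T) = k·∫₀ᵀ[A] dt, so k follows from the area under the concentration curve. A sketch on synthetic data (rate constant and sampling grid assumed); the paper applies the same idea to the harder parallel-consecutive scheme with a reversible step.

```python
import numpy as np

# Concentration-time integral for first-order A -> B: [B](T) = k * area
# under the [A](t) curve, so k is recovered by numerical quadrature.
# Rate constant and sampling grid are assumed toy values.
k_true, a0 = 0.35, 1.0
t = np.linspace(0.0, 10.0, 101)              # "sampling times"
A = a0 * np.exp(-k_true * t)                 # ideal measured [A](t)
B = a0 - A                                   # mass balance

area = np.sum((A[1:] + A[:-1]) * np.diff(t)) / 2.0   # trapezoidal rule
k_est = B[-1] / area
print(k_true, k_est)                         # agree up to quadrature error
```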

  4. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Scientific customer value segmentation (CVS) is the base of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include lots of missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple classifier ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset “German” from UCI and the real customer churn prediction dataset “China churn” show that ODCEM outperforms four commonly used “two-step” models and the ensemble based model LMF, and can provide better decision support for market managers.

  5. Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling

    Science.gov (United States)

    Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean

    2018-01-01

    Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of water column, with over 2 °C bias at some of the mooring stations. HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of water column at all mooring station locations. While HYCOM …

  6. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    OpenAIRE

    Jin Xiao; Bing Zhu; Geer Teng; Changzheng He; Dunhu Liu

    2014-01-01

    Scientific customer value segmentation (CVS) is the basis of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually contain many missing values, which may greatly degrade the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classif...

  7. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.
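
    A crude stand-in for the adaptive estimation idea is to re-fit a two-state Gaussian hidden Markov model on a rolling window so the parameters can drift over time. The sketch below uses the third-party hmmlearn package (assumed available) on synthetic returns; it is not the paper's estimator:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes hmmlearn is installed

# Synthetic daily returns with a volatility regime switch halfway through
rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(0, 0.005, 500),
                          rng.normal(0, 0.02, 500)]).reshape(-1, 1)

# Re-fit on a rolling window as a simple proxy for time-varying parameters
window = 250
for end in range(window, len(returns) + 1, 250):
    hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
    hmm.fit(returns[end - window:end])
    print(f"t={end}: state stds = {np.sqrt(hmm.covars_).ravel()}")
```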

  8. Coconut Model for Learning First Steps of Craniotomy Techniques and Cerebrospinal Fluid Leak Avoidance.

    Science.gov (United States)

    Drummond-Braga, Bernardo; Peleja, Sebastião Berquó; Macedo, Guaracy; Drummond, Carlos Roberto S A; Costa, Pollyana H V; Garcia-Zapata, Marco T; Oliveira, Marcelo Magaldi

    2016-12-01

    Neurosurgery simulation has gained attention recently due to changes in the medical system. First-year neurosurgical residents in low-income countries usually perform their first craniotomy on a real subject. Development of high-fidelity, cheap, and widely available simulators is a challenge in residency training. An original model for the first steps of craniotomy with cerebrospinal fluid leak avoidance practice using a coconut is described. The coconut is a drupe from Cocos nucifera L. (coconut tree). The green coconut has 4 layers, and some similarity can be seen between these layers and the human skull. The materials used in the simulation are the same as those used in the operating room. The coconut is placed on the head holder support with the face up. The burr holes are made until the endocarp is reached. The mesocarp is dissected, and the conductor is passed from one hole to the other with the Gigli saw. The hook handle for the wire saw is positioned, and the mesocarp and endocarp are cut. After sawing the 4 margins, the mesocarp is detached from the endocarp. Four burr holes are made from the endocarp to the endosperm. Careful dissection of the endosperm is done, avoiding liquid albumen leak. The Gigli saw is passed through the trephine holes. Hooks are placed, and the endocarp is cut. After cutting the 4 margins, it is dissected from the endosperm and removed. The main goal of the procedure is to remove the endocarp without fluid leakage. The coconut model for learning the first steps of craniotomy and cerebrospinal fluid leak avoidance has some limitations. It is most realistic when removing the endocarp without damaging the endosperm. It is also cheap and can be widely used in low-income countries. However, the coconut does not have anatomic landmarks. The mesocarp makes the model less realistic because it has fibers that make the procedure more difficult and different from a real craniotomy. The model has a potential pedagogic neurosurgical application for

  9. Steps Towards an Operational Service Using Near Real-Time Altimeter Data

    Science.gov (United States)

    Ash, E. R.

    2006-07-01

    Thanks largely to modern computing power, numerical forecasts of winds and waves over the oceans are ever improving, offering greater accuracy and finer resolution in time and space. However, it is recognized that met-ocean models still have difficulty in accurately forecasting severe weather conditions, the conditions that cause the most damage and difficulty in maritime operations. Therefore a key requirement is to provide improved information on severe conditions. No individual measurement or prediction system is perfect. Offshore buoys provide a continuous long-term record of wind and wave conditions, but only at a limited number of sites. Satellite data offer all-weather global coverage, but with relatively infrequent sampling. Forecasts rely on imperfect numerical schemes and the ability to manage a vast quantity of input data. Therefore the best system is one that integrates information from all available sources, taking advantage of the benefits that each can offer. We report on an initiative supported by the European Space Agency (ESA) which investigated how satellite data could be used to enhance systems to provide Near Real Time monitoring of met-ocean conditions.

  10. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that the horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.

  11. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE-RNN/RE-RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE-RNN-RNN and RE-RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell-cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that the RE-RMLP, which uses the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than the RE-RNN using the regularized RNN on short simulated time series. When the networks derived by the RE-RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step
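
    The edge rank assignment step can be caricatured with a linearized stand-in: estimate interaction weights from the time series and rank candidate edges by weight magnitude. This is not the paper's RE-RNN/RE-RMLP procedure, only a sketch of the ranking idea on toy data:

```python
import numpy as np

# Toy gene-expression time series: x(t+1) = x(t) + dt * (W x(t)) + noise
rng = np.random.default_rng(2)
n_genes, n_steps, dt = 5, 50, 0.1
W_true = np.zeros((n_genes, n_genes))
W_true[0, 1], W_true[2, 0], W_true[3, 4] = 1.5, -1.0, 0.8

x = np.zeros((n_steps, n_genes))
x[0] = rng.normal(size=n_genes)
for t in range(n_steps - 1):
    x[t + 1] = x[t] + dt * x[t] @ W_true.T + rng.normal(0, 0.01, n_genes)

# Edge-rank step (linearized stand-in): estimate W by least squares on the
# finite-difference derivatives, then rank candidate edges by |weight|
dxdt = (x[1:] - x[:-1]) / dt
W_est, *_ = np.linalg.lstsq(x[:-1], dxdt, rcond=None)   # W_est.T approximates W
order = np.argsort(-np.abs(W_est.T), axis=None)
ranks = np.dstack(np.unravel_index(order, W_est.T.shape))[0]
print("top-ranked edges (target, source):", ranks[:3].tolist())
```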

  12. Novel three-step pseudo-absence selection technique for improved species distribution modelling.

    Directory of Open Access Journals (Sweden)

    Senait D Senay

    Full Text Available Pseudo-absence selection for spatial distribution models (SDMs) is the subject of ongoing investigation. Numerous techniques continue to be developed, and reports of their effectiveness vary. Because the quality of presence and absence data is key for acceptable accuracy of correlative SDM predictions, determining an appropriate method to characterise pseudo-absences for SDMs is vital. The main methods that are currently used to generate pseudo-absence points are: (1) randomly generated pseudo-absence locations from background data; (2) pseudo-absence locations generated within a delimited geographical distance from recorded presence points; and (3) pseudo-absence locations selected in areas that are environmentally dissimilar from presence points. There is a need for a method that considers both geographical extent and environmental requirements to produce pseudo-absence points that are spatially and ecologically balanced. We use a novel three-step approach that satisfies both spatial and ecological reasons why the target species is likely to find a particular geo-location unsuitable. Step 1 comprises establishing a geographical extent around species presence points from which pseudo-absence points are selected, based on analyses of environmental variable importance at different distances. This step gives an ecologically meaningful explanation to the spatial range of background data, as opposed to using an arbitrary radius. Step 2 determines locations that are environmentally dissimilar to the presence points within the distance specified in step one. Step 3 performs K-means clustering to reduce the number of potential pseudo-absences to the desired set by taking the centroids of clusters in the most environmentally dissimilar class identified in step 2. By considering spatial, ecological and environmental aspects, the three-step method identifies appropriate pseudo-absence points for correlative SDMs. We illustrate this method by predicting the New
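
    Step 3 as described maps naturally onto an off-the-shelf clustering call; a minimal sketch with hypothetical candidate points (two environmental covariates, counts chosen for illustration only):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical candidate background points (the environmentally most
# dissimilar class from step 2), described by two environmental covariates
rng = np.random.default_rng(3)
candidates = rng.normal(size=(5000, 2))

# Step 3: reduce candidates to the desired number of pseudo-absences by
# clustering and keeping the cluster centroids
n_pseudo_absences = 100
km = KMeans(n_clusters=n_pseudo_absences, n_init=10, random_state=0)
km.fit(candidates)
pseudo_absences = km.cluster_centers_
print(pseudo_absences.shape)   # (100, 2)
```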

  13. Rotordynamic analysis for stepped-labyrinth gas seals using moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single-cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single-cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of leakage characteristics with Scharrer's analysis, but underpredicts leakage by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values compared with Scharrer's analysis.
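
    Moody's explicit wall-friction-factor approximation referred to here can be sketched as follows; this is our reading of the standard correlation, with validity ranges and example inputs that are approximate and illustrative:

```python
def moody_friction_factor(re: float, rel_roughness: float) -> float:
    """Moody's explicit approximation for the Darcy friction factor,
    f = 0.0055 * (1 + (2e4*(eps/D) + 1e6/Re)^(1/3)),
    valid roughly for 4e3 < Re < 1e8 and eps/D < 0.01."""
    return 0.0055 * (1.0 + (2.0e4 * rel_roughness + 1.0e6 / re) ** (1.0 / 3.0))

# Example: turbulent flow in a seal-like passage (illustrative values)
print(moody_friction_factor(re=5.0e4, rel_roughness=1e-3))
```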

  14. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task and precondition if such methods are to be used routinely is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all measured noninvasively during hypoxemia in healthy volunteers, comprise four signals measured using near-infrared spectroscopy (NIRS) along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag up important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.

  15. Bereday and Hilker: Origins of the "Four Steps of Comparison" Model

    Science.gov (United States)

    Adick, Christel

    2018-01-01

    The article draws attention to the forgotten ancestry of the "four steps of comparison" model (description--interpretation--juxtaposition--comparison). Comparativists largely attribute this to George Z. F. Bereday [1964. "Comparative Method in Education." New York: Holt, Rinehart and Winston], but among German scholars, it is…

  16. Anticipated tt-bar states in the quark-confining two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.

    1980-12-01

    The mass spectrum, thresholds and decay widths of the anticipated tt-bar states are studied as functions of the quark mass, in a simple, analytically solvable, quark-confining two-step potential model previously used for charmonium and bottomonium. (author)

  17. The cc-bar and bb-bar spectroscopy in the two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaiserslautern Univ.

    1984-07-01

    We investigate the spectroscopy of the charmonium (cc-bar) and bottomonium (bb-bar) bound states in a static, flavour-independent, nonrelativistic quark-antiquark (qq-bar) two-step potential model proposed earlier. Our predictions are in good agreement with experimental data and with other theoretical predictions. (author)

  18. Time-stepping stability of continuous and discontinuous finite-element methods for 3-D wave propagation

    NARCIS (Netherlands)

    Mulder, W.A.; Zhebel, E.; Minisini, S.

    2013-01-01

    We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the

  19. Two-step relaxation mode analysis with multiple evolution times applied to all-atom molecular dynamics protein simulation

    Science.gov (United States)

    Karasawa, N.; Mitsutake, A.; Takano, H.

    2017-12-01

    Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.

  20. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
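
    For concreteness, here is a sketch of the first-order RKL1 superstep as we read the Legendre-based recursion described above, applied to 1D diffusion; the stage count, grid, and step sizes are illustrative:

```python
import numpy as np

def rkl1_step(u, L, dt, s):
    """One RKL1 superstep of size dt using s stages (our reading of the
    recursion; the superstep can be ~(s^2+s)/2 explicit time-steps)."""
    w1 = 2.0 / (s * s + s)
    y_prev, y = u, u + w1 * dt * L(u)            # stages Y_0 and Y_1
    for j in range(2, s + 1):
        mu, nu = (2 * j - 1) / j, (1 - j) / j    # Legendre recursion weights
        y, y_prev = mu * y + nu * y_prev + mu * w1 * dt * L(y), y
    return y

# 1D diffusion u_t = D u_xx with fixed (Dirichlet) boundaries
D, nx = 1.0, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]

def L(u):
    lap = np.zeros_like(u)
    lap[1:-1] = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap

u = np.exp(-200 * (x - 0.5) ** 2)
s = 9
dt_exp = 0.5 * dx * dx / D                # explicit stability limit
dt_super = 0.5 * (s * s + s) * dt_exp     # RKL1 superstep
for _ in range(10):
    u = rkl1_step(u, L, dt_super, s)
print(u.max())
```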

  1. The Antibiotic Resistance Arrow of Time: Efflux Pump Induction Is a General First Step in the Evolution of Mycobacterial Drug Resistance

    OpenAIRE

    Schmalstieg, Aurelia M.; Srivastava, Shashikant; Belkaya, Serkan; Deshpande, Devyani; Meek, Claudia; Leff, Richard; van Oers, Nicolai S. C.; Gumbo, Tawanda

    2012-01-01

    We hypothesize that low-level efflux pump expression is the first step in the development of high-level drug resistance in mycobacteria. We performed 28-day azithromycin dose-effect and dose-scheduling studies in our hollow-fiber model of disseminated Mycobacterium avium-M. intracellulare complex. Both microbial kill and resistance emergence were most closely linked to the within-macrophage area under the concentration-time curve (AUC)/MIC ratio. Quantitative PCR revealed that subtherapeutic ...

  2. Model predictive control of two-step nitrification and its validation via short-cut nitrification tests.

    Science.gov (United States)

    Sui, Jun; Luo, Fan; Li, Jie

    2016-10-01

    Short-cut nitrification (SCN) is an attractive technology owing to its savings in aeration and external carbon source addition costs. However, the omission of nitrite nitrogen as a model state in the Activated Sludge Models limits model predictive control of biological nitrogen removal via SCN. In this paper, a two-step kinetic model was developed, with pH and temperature introduced as process controllers, and it was implemented in an SBR reactor. The simulation results for optimizing operating conditions showed that, with increasing dissolved oxygen (DO), the rate of ammonia oxidation and nitrite accumulation first increase, so that an SCN process is achieved; by further increasing DO, the SCN process is transformed into a complete nitrification process. In addition, within a certain range, increasing the sludge retention time and aeration time is beneficial to the accumulation of nitrite. The implementation results in the SBR reactor showed that the data predicted by the kinetic model agree with the measured data, indicating that the two-step kinetic model is appropriate for simulating the ammonia removal and nitrite production kinetics.
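
    A minimal two-step nitrification kinetic model of the kind described, with Monod limitation by substrate and dissolved oxygen, can be sketched as follows; the parameters are illustrative and the pH/temperature dependence of the paper's model is omitted:

```python
from scipy.integrate import solve_ivp

# Two-step nitrification: NH4 -> NO2 (ammonia oxidizers), NO2 -> NO3
# (nitrite oxidizers), each rate Monod-limited by substrate and oxygen.
mu1, mu2 = 0.8, 0.6          # max rates, 1/d (illustrative)
K_nh, K_no2 = 1.0, 1.5       # half-saturation, mg N/L
K_o1, K_o2 = 0.3, 1.1        # oxygen half-saturation, mg O2/L
DO = 0.8                     # low DO favors nitrite accumulation (SCN)

def rhs(t, y):
    nh4, no2, no3 = y
    r1 = mu1 * nh4 / (K_nh + nh4) * DO / (K_o1 + DO)
    r2 = mu2 * no2 / (K_no2 + no2) * DO / (K_o2 + DO)
    return [-r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 10.0), [30.0, 0.0, 0.0])
print("final NH4/NO2/NO3:", sol.y[:, -1].round(2))
```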

  3. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.

  4. Detailed models for timing and efficiency in resistive plate chambers

    CERN Document Server

    AUTHOR|(CDS)2067623; Lippmann, Christian

    2003-01-01

    We discuss detailed models for detector physics processes in Resistive Plate Chambers, in particular including the effect of attachment on the avalanche statistics. In addition, we present analytic formulas for average charges and intrinsic RPC time resolution. Using a Monte Carlo simulation including all the steps from primary ionization to the front-end electronics we discuss the dependence of efficiency and time resolution on parameters like primary ionization, avalanche statistics and threshold.

  5. Modeling the Step-like Response in the Upper Limbs of Hemiplegic Subjects for Evaluation of Spasticity

    Science.gov (United States)

    Uchiyama, Takanori; Uchida, Ryusei

    The purpose of this study is to develop a new modeling technique for quantitative evaluation of spasticity in the upper limbs of hemiplegic patients. Each subject lay on a bed, and his forearm was supported with a jig to measure the elbow joint angle. The subject was instructed to relax and not to resist the step-like load which was applied to extend the elbow joint. The elbow joint angle and electromyogram (EMG) of the biceps muscle, triceps muscle and brachioradialis muscle were measured. First, the step-like response was approximated with a proposed mathematical model based on musculoskeletal and physiological characteristics by the least-squares method. The proposed model involved an elastic component depending on both muscle activities and elbow joint angle. The responses were approximated well with the proposed model. Next, the torque generated by the elastic component was estimated. The normalized elastic torque was approximated with a damped sinusoid by the least-squares method. The reciprocal of the time constant and the natural frequency of the normalized elastic torque were calculated, and they varied depending on the grades of the modified Ashworth scale of the subjects. It was suggested that the proposed modeling technique would provide a good quantitative index of spasticity, as shown in the relationship between the reciprocal of the time constant and the natural frequency.
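
    Fitting a damped sinusoid and reading off the reciprocal time constant and natural frequency can be sketched with a standard least-squares routine; the data below are synthetic and the parameters illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sinusoid(t, a, lam, f, phi):
    """a * exp(-lam*t) * sin(2*pi*f*t + phi); lam = 1 / time constant."""
    return a * np.exp(-lam * t) * np.sin(2 * np.pi * f * t + phi)

# Synthetic "normalized elastic torque" response with measurement noise
rng = np.random.default_rng(4)
t = np.linspace(0, 2, 400)
torque = damped_sinusoid(t, 1.0, 2.5, 3.0, 0.3) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(damped_sinusoid, t, torque, p0=[1, 1, 2, 0])
a, lam, f, phi = popt
print(f"1/time-constant = {lam:.2f} 1/s, natural frequency = {f:.2f} Hz")
```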

  6. Modeling and experimental characterization of stepped and v-shaped (311) defects in silicon

    Energy Technology Data Exchange (ETDEWEB)

    Marqués, Luis A., E-mail: lmarques@ele.uva.es; Aboy, María [Departamento de Electrónica, Universidad de Valladolid, E.T.S.I. de Telecomunicación, 47011 Valladolid (Spain); Dudeck, Karleen J.; Botton, Gianluigi A. [Department of Materials Science and Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L7 (Canada); Knights, Andrew P. [Department of Engineering Physics, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L7 (Canada); Gwilliam, Russell M. [Surrey Ion Beam Centre, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

    2014-04-14

    We propose an atomistic model to describe extended (311) defects in silicon. It is based on the combination of interstitial and bond defect chains. The model is able to accurately reproduce not only planar (311) defects but also defect structures that show steps, bends, or both. We use molecular dynamics techniques to show that these interstitial and bond defect chains spontaneously transform into extended (311) defects. Simulations are validated by comparing with precise experimental measurements on actual (311) defects. The excellent agreement between the simulated and experimentally derived structures, regarding individual atomic positions and shape of the distinct structural (311) defect units, provides strong evidence for the robustness of the proposed model.

  7. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion.

  8. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    Science.gov (United States)

    Lim, S. C.; Teo, L. P.

    2009-08-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann-Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion.
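
    A crude discretization of the Riemann-Liouville construction with a step Hurst exponent can be sketched as follows; this is a simplified reading for illustration, with arbitrary normalization, step location and parameter values:

```python
import numpy as np
from math import gamma

def rl_step_fbm(n, dt, hurst_of_t):
    """Riemann-Liouville fBm with a time-dependent (step) Hurst exponent:
    B(t_n) ~ sum_k (t_n - t_k)^(H(t_n)-1/2) dW_k / Gamma(H(t_n)+1/2).
    A crude left-point discretization, for illustration only."""
    rng = np.random.default_rng(5)
    dW = rng.normal(0.0, np.sqrt(dt), n)
    t = np.arange(1, n + 1) * dt
    B = np.zeros(n)
    for i in range(1, n):
        H = hurst_of_t(t[i])
        w = (t[i] - t[:i]) ** (H - 0.5)   # kernel weights on past increments
        B[i] = np.sum(w * dW[:i]) / gamma(H + 0.5)
    return t, B

# Step Hurst: H=0.5 (normal diffusion) early, H=0.25 (subdiffusion) later
t, B = rl_step_fbm(2000, 0.01, lambda s: 0.5 if s < 10.0 else 0.25)
print(B[-1])
```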

  9. From elementary steps to structural relaxation: a continuous-time random-walk analysis of a supercooled liquid.

    Science.gov (United States)

    Rubner, Oliver; Heuer, Andreas

    2008-07-01

    We show that the dynamics of supercooled liquids, analyzed from computer simulations of the binary mixture Lennard-Jones system, can be described in terms of a continuous-time random walk (CTRW). The required discretization comes from mapping the dynamics on transitions between metabasins. This yields a quantitative link between the elementary step and the full structural relaxation. The analysis involves a verification of the CTRW conditions as well as a quantitative test of the predictions. The wave-vector dependence of the relaxation time and the degree of nonexponentiality can be expressed in terms of the first moments of the waiting time distribution.
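
    A minimal CTRW of this kind is easy to simulate; the sketch below draws power-law waiting times and Gaussian jumps, with parameters that are illustrative rather than fitted to the Lennard-Jones data:

```python
import numpy as np

# Minimal CTRW: power-law waiting times between metabasin transitions,
# Gaussian jump lengths (parameters illustrative, not from the paper)
rng = np.random.default_rng(6)
n_steps, alpha = 100_000, 1.5             # waiting-time tail exponent

waits = rng.pareto(alpha, n_steps) + 1.0  # waiting times tau >= 1
jumps = rng.normal(0.0, 1.0, n_steps)     # displacement per transition
t = np.cumsum(waits)                      # transition times
x = np.cumsum(jumps)                      # position after each transition

# Sample the trajectory at fixed observation times
t_obs = np.linspace(0, t[-1], 1000)
idx = np.searchsorted(t, t_obs)           # number of completed transitions
x_obs = np.where(idx > 0, x[np.maximum(idx - 1, 0)], 0.0)
print(x_obs[-1])
```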

  10. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

    Full Text Available A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. It was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques are described, and the model test and application are presented. In the model application to the northern part of the Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles were in good agreement with the field observations.

  11. Time-stepped & discrete-event simulations of electromagnetic propulsion systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The existing plasma codes are ill suited for modeling of mixed resolution problems, such as the plasma sail, where the system under study comprises subsystems with...

  12. Time-stepped & discrete-event simulations of electromagnetic propulsion systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The existing plasma codes are ill suited for modeling of mixed resolution problems, such as the plasma sail, where the system under study comprises subsystems with...

  13. Time-stepped & discrete-event simulations of electromagnetic propulsion systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a new generation of electromagnetic simulation codes with mixed resolution modeling capabilities. The need for such codes arises in many fields...

  14. FIRST STEPS TOWARDS AN INTEGRATED CITYGML-BASED 3D MODEL OF VIENNA

    Directory of Open Access Journals (Sweden)

    G. Agugiaro

    2016-06-01

    This paper reports on the experience gained so far: it describes the test area and the available data sources, and it shows and exemplifies the data integration issues and the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.

  15. A new methodology to determine kinetic parameters for one- and two-step chemical models

    Science.gov (United States)

    Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.

    1996-01-01

    In this paper, a new methodology to determine kinetic parameters for simple chemical models and simple transport properties classically used in DNS of premixed combustion is presented. First, a one-dimensional code is used to compute steady unstrained laminar methane-air flames in order to verify intrinsic features of laminar flames such as the burning velocity and the temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that, for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame submitted to a steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is simulated by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.

  16. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  17. ns-μs Time-Resolved Step-Scan FTIR of ba3 Oxidoreductase from Thermus thermophilus: Protonic Connectivity of w941-w946-w927

    Directory of Open Access Journals (Sweden)

    Antonis Nicolaides

    2016-09-01

    Full Text Available Time-resolved step-scan FTIR spectroscopy has been employed to probe the dynamics of the ba3 oxidoreductase from Thermus thermophilus in the ns-μs time range and in the pH/pD 6–9 range. The data revealed a pH/pD sensitivity of the D372 residue and of the ring-A propionate of heme a3. Based on the observed transient changes, a model is described in which the protonic connectivity of w941-w946-w927 to the D372 and the ring-A propionate of heme a3 is established.

  18. Effects of Mg Addition with Natural Aging Time on Two-Step Aging Behavior in Al-Mg-Si Alloys.

    Science.gov (United States)

    Im, Jiwoo; Kim, JaeHwang

    2018-03-01

    The influence of Mg content and natural aging (NA) time on the two-step aging behavior of Al-Mg-Si alloys is studied. Hardness gradually increases during NA in the 3M4S, whereas a dramatic increase in hardness after NA for 3.6 ks is confirmed in the 9M4S. Similar peak hardness is confirmed between the two-step aged and single-aged samples in the 3M4S, meaning that there is no negative effect of two-step aging. On the other hand, the peak hardness is decreased for the naturally aged sample compared with the single-aged one in the 9M4S. Formation of Cluster (1) is accelerated by the Mg addition, resulting in the negative effect of two-step aging. Meanwhile, the formation of the precipitates during aging at 170 °C is accelerated by the Mg addition. The precipitate formed at peak hardness during aging at 170 °C after natural aging for 43.2 ks is identified as the β″ phase, based on high-resolution transmission electron microscopy observations.

  19. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...

  20. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and a detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  1. Stepping out of Time : Performing Queer Temporality, Memory, and Relationality in Timelining

    NARCIS (Netherlands)

    Kessel, van L.

    2017-01-01

    In their performance Timelining, Brennan Gerard and Ryan Kelly explore the ways in which intimate relationships are constituted in time. The performance consists of a memory game in which two performers retrace their shared history as a couple. Throughout the performance, the various actions

  2. One-step electrodeposition process of CuInSe2: Deposition time effect

    Indian Academy of Sciences (India)

    Administrator

    Electrodeposition; CuInSe2; deposition time; thin films. 1. Introduction. Chalcopyrite, CuInSe2, is considered one of the most important semiconductors that can be used to make low-cost photovoltaic devices. It has a high absorption coefficient (> 10⁵ cm⁻¹) (Kavcar et al 1992; Huang et al 2004), reasonable work function ...

  3. Associations of reallocating sitting time into standing or stepping with glucose, insulin and insulin sensitivity: a cross-sectional analysis of adults at risk of type 2 diabetes.

    Science.gov (United States)

    Edwardson, Charlotte L; Henson, Joe; Bodicoat, Danielle H; Bakrania, Kishan; Khunti, Kamlesh; Davies, Melanie J; Yates, Thomas

    2017-01-13

    To quantify associations between sitting time and glucose, insulin and insulin sensitivity by considering reallocation of time into standing or stepping. Cross-sectional. Leicestershire, UK, 2013. Adults aged 30-75 years at high risk of impaired glucose regulation (IGR) or type 2 diabetes. 435 adults (age 66.8±7.4 years; 61.7% male; 89.2% white European) were included. Participants wore an activPAL3 monitor 24 hours/day for 7 days to capture time spent sitting, standing and stepping. Fasting and 2-hour postchallenge glucose and insulin were assessed; insulin sensitivity was calculated by the Homeostasis Model Assessment of Insulin Sensitivity (HOMA-IS) and the Matsuda Insulin Sensitivity Index (Matsuda-ISI). Isotemporal substitution regression modelling was used to quantify associations of substituting 30 min of waking sitting time (accumulated in prolonged (≥30 min) or short (<30 min) bouts) with standing or stepping. Reallocation of prolonged sitting to short sitting time and to standing was associated with 4% lower fasting insulin and 4% higher HOMA-IS; reallocation of prolonged sitting to standing was also associated with a 5% higher Matsuda-ISI. Reallocation to stepping was associated with 5% lower 2-hour glucose, 7% lower fasting insulin, 13% lower 2-hour insulin and a 9% and 16% higher HOMA-IS and Matsuda-ISI, respectively. Reallocation of short sitting time to stepping was associated with 5% and 10% lower 2-hour glucose and 2-hour insulin and 12% higher Matsuda-ISI. Results were not modified by IGR status or sex. Reallocating a small amount of short or prolonged sitting time with standing or stepping may improve 2-hour glucose, fasting and 2-hour insulin and insulin sensitivity. Findings should be confirmed through prospective and intervention research. ISRCTN31392913, Post-results.
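
    The isotemporal substitution idea is easy to sketch: fit a regression that includes all activity behaviours except the one being displaced, plus total wear time, so each included coefficient estimates the effect of reallocating time from the omitted behaviour. A toy version with synthetic data (not the study's covariate set) might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy activPAL-style data (hours/day); all values are synthetic
rng = np.random.default_rng(7)
n = 435
df = pd.DataFrame({
    "stand": rng.normal(4, 1, n),
    "step": rng.normal(2, 0.5, n),
    "sit": rng.normal(10, 2, n),
})
df["wear"] = df[["stand", "step", "sit"]].sum(axis=1)
df["glucose"] = 6 - 0.1 * df["step"] - 0.02 * df["stand"] + rng.normal(0, 0.5, n)

# Isotemporal substitution: drop the displaced behaviour (sitting) and
# adjust for total wear time; "stand" and "step" coefficients then read
# as the effect of reallocating sitting time into those behaviours
model = smf.ols("glucose ~ stand + step + wear", data=df).fit()
print(model.params[["stand", "step"]])
```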

  4. Comparison of Different Turbulence Models for Numerical Simulation of Pressure Distribution in V-Shaped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Zhaoliang Bai

    2017-01-01

    Full Text Available The V-shaped stepped spillway is a newly shaped stepped spillway, and its pressure distribution is quite different from that of the traditional stepped spillway. In this paper, five turbulence models were used to simulate the pressure distribution in the skimming flow regime. Compared against the physical measurements, the realizable k-ε model had the best precision in simulating the pressure distribution. The flow patterns of the V-shaped and traditional stepped spillways were then examined using the realizable k-ε turbulence model to illustrate the unique pressure distribution.

  5. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-03

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing-environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided, using the Community Atmosphere Model (CAM) version 5.3, that the proposed procedure can be used to distinguish rounding-level solution changes from the impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive, since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.

  6. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  7. Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system

    Directory of Open Access Journals (Sweden)

    Yoon Lee

    2012-08-01

    Full Text Available Objectives: To investigate the effect of dentin moisture degree and air-drying time on the dentin bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. Each tooth was then sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA, and regression analysis was performed (p = 0.05). Results: All three factors (material, dentin wetness and air-drying time) showed significant effects on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and 10 sec of air drying showed higher bond strength. Conclusions: Within the limitations of this experiment, air-drying time after application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and the dentin moisture before applying the adhesive agent.

  8. Structure of turbulent non-premixed flames modeled with two-step chemistry

    Science.gov (United States)

    Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.

    1992-01-01

    Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B yields I, A + I yields P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small scale mixing process and, hence, the turbulent flame structure is sensitive to the Schmidt number.
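
    The two-step mechanism quoted here is easy to exercise in a zero-dimensional setting; a sketch with illustrative rate constants (no turbulence or transport, unlike the simulations described):

```python
from scipy.integrate import solve_ivp

# Two-step model chemistry: A + B -> I, A + I -> P, with finite rates
# (rate constants are illustrative, not taken from the simulations)
k1, k2 = 5.0, 1.0

def rhs(t, y):
    a, b, i, p = y
    r1 = k1 * a * b
    r2 = k2 * a * i
    return [-r1 - r2, -r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 1.0, 0.0, 0.0], rtol=1e-8)
a, b, i, p = sol.y[:, -1]
print(f"A={a:.3f} B={b:.3f} I={i:.3f} P={p:.3f}")
```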

  9. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    Energy Technology Data Exchange (ETDEWEB)

    Finn, John M., E-mail: finn@lanl.gov [T-5, Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
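
    As a rough illustration of the implicit midpoint (IM) scheme discussed here, the following sketch advances a point along a divergence-free field with IM steps solved by fixed-point iteration; the field and step size are illustrative, and no adaptive time stepping is attempted:

```python
import numpy as np

def implicit_midpoint(f, x, h, iters=8):
    """Implicit midpoint step x' = x + h * f((x + x')/2), solved by
    fixed-point iteration (a minimal sketch, fixed step size)."""
    x_new = x + h * f(x)                    # explicit Euler predictor
    for _ in range(iters):
        x_new = x + h * f(0.5 * (x + x_new))
    return x_new

def B(x):
    """Example divergence-free field: the classic ABC flow with A=B=C=1."""
    return np.array([np.sin(x[2]) + np.cos(x[1]),
                     np.sin(x[0]) + np.cos(x[2]),
                     np.sin(x[1]) + np.cos(x[0])])

x = np.array([0.1, 0.2, 0.3])
for _ in range(1000):
    x = implicit_midpoint(B, x, h=0.01)
print(x)
```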

  10. Hepatoprotective and anti-fibrotic agents: It’s time to take the next step

    Directory of Open Access Journals (Sweden)

    Ralf eWeiskirchen

    2016-01-01

    Full Text Available Hepatic fibrosis and cirrhosis cause great human suffering and impose a substantial economic burden worldwide. Therefore, there is an urgent need for the development of therapies. Pre-clinical animal models are indispensable in the discovery and development of new anti-fibrotic compounds and are immensely valuable for understanding and proving their proposed mode of action. In fibrosis research, inbred mice and rats are by far the most used species for testing drug efficacy. During the last decades, several hundred or even a thousand different drugs that reproducibly produce beneficial effects on liver health in the respective disease models were identified. However, only a few compounds (e.g. GR-MD-02, GM-CT-01) have been translated from bench to bedside. In contrast, the large number of drugs successfully tested in animal studies are repeatedly tested over and over, engendering findings with similar or identical outcomes. This circumstance undermines the 3R (Replacement, Refinement, Reduction) principle of Russell and Burch, which was introduced to minimize the suffering of laboratory animals. This ethical framework, however, represents the basis of the new animal welfare regulations in the member states of the European Union. Consequently, the legal authorities in the different countries are called upon to foreclose testing of drugs in animals that were successfully tested before. This review provides a synopsis of anti-fibrotic compounds that have been tested in classical rodent models. Their mode of action, potential sources and the observed beneficial effects on liver health are discussed. The review attempts to provide a reference compilation for all those involved in the testing of drugs or in the design of new clinical trials targeting hepatic fibrosis.

  11. NIH and WHI: time for a mea culpa and steps beyond.

    Science.gov (United States)

    Utian, Wulf H

    2007-01-01

    The termination of the estrogen-progestin arm of the Women's Health Initiative (WHI) 5 years ago was abrupt and poorly planned. It has also become manifestly clear that the reporting at that time of the balance of risk and benefit for perimenopausal and early postmenopausal women was grossly exaggerated. Subsequent WHI publications including subanalyses of original data suggest a persistent pattern of over-reading of results toward a negative bias. The initial 2002 conclusion of the WHI investigators that harm was greater than benefit appears to be the result of several factors. One was the failure to recognize that initiation of therapy by decade of age or time since menopause was highly relevant; the WHI committee aggregated all outcome data into one group, even though in their demographic description they had the ability to investigate by age. An overhanging question is, therefore, what did they know, and when did they know it? Another factor was the utilization of a nonvalidated index termed the "global health index" that inexplicably assumed for comparison sake that all diseases were equivalent, for example, that a stroke was equivalent to a hip fracture in morbidity, mortality, and impact on quality of life. Although not a study about menopause, the data were extrapolated to all peri- and postmenopausal women. Despite the overall positive outcome of their results for women aged 50 to 60 years, most particularly those receiving estrogen-only therapy, the WHI investigators have irrationally maintained a defense of their misinterpretations of 2002. It is time for the National Institutes of Health and the WHI investigators to issue a final overall report that is clear and based on their actual results and not their personal interpretations. There is too much relevant and important information within the WHI to allow the overall study to continue to be perceived as biased to the detriment of both the National Institutes of Health and the study itself.

  12. First steps towards real-time radiography at the NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Buecherl, T. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)], E-mail: thomas.buecherl@radiochemie.de; Wagner, F.M. [Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II), Technische Universitaet Muenchen (Germany); Lierse von Gostomski, Ch. [Lehrstuhl fuer Radiochemie (RCM), Technische Universitaet Muenchen (TUM) (Germany)

    2009-06-21

    The beam tube SR10 at the Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8 × 10⁷ cm⁻² s⁻¹ (depending on filters and collimation), with a mean energy of about 1.9 MeV at the sample position of the NECTAR facility, prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.

  13. One-step Real-time Food Quality Analysis by Simultaneous DSC-FTIR Microspectroscopy.

    Science.gov (United States)

    Lin, Shan-Yang; Lin, Chih-Cheng

    2016-01-01

    This review discusses an analytical technique that combines differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy, which simulates the accelerated stability test and detects decomposition products simultaneously in real time. We show that the DSC-FTIR technique is a fast, simple and powerful analytical tool with applications in food sciences. This technique has been applied successfully to the simultaneous investigation of: encapsulated squid oil stability; the dehydration and intramolecular condensation of sweetener (aspartame); the dehydration, rehydration and solidification of trehalose; and online monitoring of the Maillard reaction for glucose (Glc)/asparagine (Asn) in the solid state. This technique delivers rapid and appropriate interpretations with food science applications.

  14. A four-step model: Initiating writing development at faculty level using Master’s thesis workshops as a vehicle

    DEFF Research Database (Denmark)

    Jensen, Tine Wirenfeldt; Jensen, Eva Naur; Bay, Gina

    In recent years, Danish university education has seen a rise of regulation from central government, intended to significantly reduce students' degree completion time (The Study Progress Reform, 2013). One of the many effects of the reform is a reduction of the time available for students to write a Master's thesis, as well as less flexibility regarding when the Master's thesis process begins and ends. The reform has created an immediate need for increased support of academic writing development. This presents a challenge to all faculties, but especially those without writing centers or prior traditions for addressing academic writing development at a central level. This paper presents a four-step model for initiating development of academic writing skills at such faculties. The model was developed, tested and evaluated in the fall of 2015 in collaboration with all seven departments at Aarhus...

  15. Time-step selection considerations in the analysis of reactor transients with DIF3D-K

    International Nuclear Information System (INIS)

    Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.

    1993-01-01

    The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options.

  17. Validity and reliability of a simple 'low-tech' test for measuring choice stepping reaction time in older people.

    Science.gov (United States)

    Delbaere, K; Gschwind, Y J; Sherrington, C; Barraclough, E; Garrués-Irisarri, M A; Lord, S R

    2016-11-01

    To establish the psychometric properties of a simple 'low-tech' choice stepping reaction time test (CSRT-M) by investigating its validity and test-retest reliability. Cross-sectional. Community. A total of 169 older people from the control arm of a clinical trial and a convenience sample of 30 older people. Demographic, physical, cognitive and prospective falls data were collected in addition to the CSRT-M. The CSRT-M time was taken as the total time to complete 20 steps onto four targets printed on a portable rubber mat. Assessment of the original electronic version (CSRT-E) and re-administration of the CSRT-M the next day were done in 30 participants. Multivariate regression analysis showed that the CSRT-M time was best explained by leaning balance control, quadriceps strength and cognitive functioning (R^2 = 0.44). Performance on the CSRT-M was worse in older participants and in participants with fall risk factors present, supporting good discriminant validity. The odds of suffering multiple future falls increased by 74% (odds ratio (OR) = 1.74, 95% CI (confidence interval) = 1.14-2.65, p = 0.010) for each standard deviation increase in CSRT-M, supporting good predictive validity. Criterion validity was confirmed by a strong bivariate correlation between the CSRT-M and the CSRT-E (0.81). Test-retest reliability for the CSRT-M was good (intraclass correlation coefficient = 0.74, 95% CI = 0.45-0.88). This simple 'low-tech' test of unplanned volitional stepping (CSRT-M) has excellent predictive validity for future falls, good inter-day test-retest reliability and excellent criterion validity with respect to the well-validated CSRT-E. The CSRT-M, therefore, may be a useful fall risk screening tool for older people.

  18. Effect of increased exposure times on amount of residual monomer released from single-step self-etch adhesives.

    Science.gov (United States)

    Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet

    2015-10-16

    The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of released residual monomers from adhesives was observed in the 10th minute. There were statistically significant differences regarding released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times according to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.

  19. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option to do so, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for those stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs alternative artificial intelligence approaches, an artificial neural network (ANN), genetic programming (GP) and a model tree (MT), to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. No large differences were noticed across the chosen techniques (ANN, GP and MT). Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
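
    The approach described above is, in essence, lagged regression: past wind observations form the input vector and the wave parameter at a chosen lead time is the target. The following Python sketch illustrates that idea; the network architecture, number of lags, lead time and synthetic data are illustrative assumptions, not the configuration or data used in the paper (which also used wind direction and compared GP and MT models).

        # Lagged wind observations mapped to wave height at a future lead time.
        # All choices below (3 lags, one hidden layer, toy data) are illustrative.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 1000
        wind_speed = 8 + 2 * np.sin(np.arange(n) / 24) + rng.normal(0, 0.5, n)
        wave_height = 0.3 * wind_speed + rng.normal(0, 0.1, n)  # toy relation

        lags, lead = 3, 6   # 3 past wind readings predict waves 6 steps ahead
        X = np.column_stack([wind_speed[i : n - lead - lags + i + 1] for i in range(lags)])
        y = wave_height[lags - 1 + lead :]

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X[:800], y[:800])
        print("test R^2:", model.score(X[800:], y[800:]))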

  20. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  1. The next step in real time data processing for large scale physics experiments

    CERN Document Server

    Paramesvaran, Sudarshan

    2016-01-01

    Run 2 of the LHC represents one of the most challenging scientific environments for real time data analysis and processing. The steady increase in instantaneous luminosity will result in the CMS detector producing around 150 TB/s of data, only a small fraction of which is useful for interesting physics studies. During 2015 the CMS collaboration will be completing a total upgrade of its Level 1 Trigger to deal with these conditions. In this talk the major components of this complex system will be described. This will include a discussion of custom-designed electronic processing boards, built to the uTCA specification with AMC cards based on Xilinx 7 FPGAs and a network of high-speed optical links. In addition, novel algorithms will be described which deliver excellent performance in FPGAs and are combined with highly stable software frameworks to ensure a minimal risk of downtime. This upgrade is planned to take data from 2016. However a system of parallel running has been developed that will ...

  2. In the time of significant generational diversity - surgical leadership must step up!

    Science.gov (United States)

    Money, Samuel R; O'Donnell, Mark E; Gray, Richard J

    2014-02-01

    The diverse attitudes and motivations of surgeons and surgical trainees within different age groups present an important challenge for surgical leaders and educators. These challenges to surgical leadership are not unique, and other industries have likewise needed to grapple with how best to manage these various age groups. The authors will herein explore management and leadership for surgeons in a time of age diversity, define generational variations within "Baby-Boomer", "Generation X" and "Generation Y" populations, and identify work ethos concepts amongst these three groups. The surgical community must understand and embrace these concepts in order to continue to attract a stellar pool of applicants from medical school. By not accepting the changing attitudes and motivations of young trainees and medical students, we may disenfranchise a high percentage of potential future surgeons. Surgical training programs will fill, but will they contain the highest quality trainees? Copyright © 2013 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  3. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    Science.gov (United States)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was just developed to 1/8 and even 1/32 degree nowadays. On the other hand, the literature survey showed that the time step at which the model is calibrated and/or validated has remained the same over the last 17 years: mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1...

  4. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

    The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized for reproduction of some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device, including the mass transfer step of ionic reactants. Results of the calculation have demonstrated not only a qualitative explanation of the characteristics observed in the experiment, but also quantitative similarities of the resultant current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  5. A simple one-step chemistry model for partially premixed hydrocarbon combustion

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2006-10-15

    This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature T_a, and the preexponential factor B. It is seen that changes in q with equivalence ratio f need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(f) for f > 1 for all hydrocarbons. The model also employs a variable activation temperature T_a(f) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately over the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
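
    Since the model above is fully specified by three parameters, a small sketch can make the rate expression concrete. The functional forms below follow the abstract (one-step irreversible Arrhenius kinetics with unity reaction order and a piecewise-linear q(f) for rich mixtures); the numerical coefficients are placeholders, not the published fits.

        # One-step Arrhenius rate with unity reaction order:
        #   w = B * rho * Y_fuel * exp(-T_a / T)
        import numpy as np

        def reaction_rate(rho, Y_fuel, T, B=1.0e9, T_a=15000.0):
            """Fuel consumption rate for first-order one-step kinetics."""
            return B * rho * Y_fuel * np.exp(-T_a / T)

        def heat_of_reaction(f, q_st=50.0e6, slope=-15.0e6):
            """Constant q for lean mixtures, linear decrease for rich (f > 1).
            q_st and slope are illustrative values, not the paper's fit."""
            return q_st if f <= 1.0 else q_st + slope * (f - 1.0)

        print(reaction_rate(rho=1.1, Y_fuel=0.05, T=1800.0))
        print(heat_of_reaction(1.3))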

  6. Detection of Listeria monocytogenes in ready-to-eat food by Step One real-time polymerase chain reaction.

    Science.gov (United States)

    Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal

    2012-01-01

    The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the Step One real time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR performance. In 30 samples of ready-to-eat milk and meat products without incubation we detected strains of Listeria monocytogenes in five samples (swabs). Internal positive control (IPC) was positive in all samples. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.

  7. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    "The authors systematically develop a state-of-the-art analysis and modeling of time series. ... this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book." (Hsun-Hsien Chang, Computing Reviews, March 2012) "My favorite chapters were on dynamic linear models and vector AR and vector ARMA models." (William Seaver, Technometrics, August 2011) "... a very modern entry to the field of time-series modelling, with a rich reference list of the current lit..."

  8. MODELLING OF ORDINAL TIME SERIES BY PROPORTIONAL ODDS MODEL

    Directory of Open Access Journals (Sweden)

    Serpil AKTAŞ ALTUNAY

    2013-06-01

    Full Text Available Categorical time series with random time-dependent covariates often arise when study variables are recorded on a categorical scale. Several models have been proposed in the literature for the analysis of categorical time series; for example, Markov chain models, integer autoregressive processes and discrete ARMA models can be utilized. In general, the choice of model depends on the measurement scale of the study variables: nominal, ordinal or interval. However, regression theory, based on generalized linear models and partial likelihood inference, is a successful approach for categorical time series. One of the regression models for ordinal time series is the proportional odds model. In this study, the proportional odds model approach to ordinal categorical time series is investigated based on a real air pollution data set and the results are discussed.
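
    For readers unfamiliar with the model, the proportional odds specification links the cumulative probabilities of an ordinal response to covariates through a single shared coefficient vector: logit P(Y <= j | x) = theta_j - beta'x. The sketch below computes category probabilities under this model; the cutpoints and coefficients are arbitrary illustrative values, not estimates from the air pollution data.

        # Cumulative-logit (proportional odds) category probabilities.
        import numpy as np

        theta = np.array([-1.0, 0.5, 2.0])   # ordered cutpoints for J = 4 categories
        beta = np.array([0.8, -0.3])         # covariate effects shared across categories

        def category_probs(x):
            cum = 1.0 / (1.0 + np.exp(-(theta - x @ beta)))   # P(Y <= j | x)
            cum = np.concatenate(([0.0], cum, [1.0]))
            return np.diff(cum)                               # P(Y = j | x), sums to 1

        print(category_probs(np.array([1.0, 2.0])))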

  9. Model of cap-dependent translation initiation in sea urchin: a step towards the eukaryotic translation regulation network.

    Science.gov (United States)

    Bellé, Robert; Prigent, Sylvain; Siegel, Anne; Cormier, Patrick

    2010-03-01

    The large and rapid increase in the rate of protein synthesis following fertilization of the sea urchin egg has long been a paradigm of translational control, an important component of the regulation of gene expression in cells. This translational up-regulation is linked to physiological changes that occur upon fertilization and is necessary for entry into the first cell division cycle. Accumulated knowledge on cap-dependent initiation of translation makes it suitable and timely to start integrating the data into a system view of biological functions. Using a programming environment for systems biology coupled with model validation (named Biocham), we have built an integrative model for cap-dependent initiation of translation. The model is described by abstract rules. It contains 51 reactions involving 74 molecular complexes. The model proved to be coherent with existing knowledge by using queries based on computational tree logic (CTL) as well as Boolean simulations. The model could simulate the change in translation occurring at fertilization in the sea urchin model. It could also be coupled with an existing model designed for cell-cycle control. Therefore, the cap-dependent translation initiation model can be considered a first step towards the eukaryotic translation regulation network.

  10. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
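
    The essential idea of a variable-time-step QSTS solver can be sketched in a few lines: advance with a coarse step while the monitored quantity (e.g., a feeder voltage) changes slowly, and fall back to the fine base resolution when it moves by more than a tolerance. The sketch below is a conceptual illustration only; it is not the paper's actual algorithm, error control or metrics.

        # Conceptual variable-time-step loop; solve_powerflow is any callable
        # returning the monitored quantity for a given load level.
        def variable_step_qsts(load_profile, solve_powerflow, coarse=60, tol=0.005):
            t, n, results = 0, len(load_profile), {}
            v_prev = solve_powerflow(load_profile[0])
            results[0] = v_prev
            while t + coarse < n:
                v_next = solve_powerflow(load_profile[t + coarse])
                if abs(v_next - v_prev) > tol:       # interval too active: refine
                    for s in range(t + 1, t + coarse + 1):
                        results[s] = solve_powerflow(load_profile[s])
                    v_next = results[t + coarse]
                t += coarse                          # otherwise keep the coarse step
                results[t] = v_next
                v_prev = v_next
            return results

        # e.g.: variable_step_qsts(profile, lambda load: 1.0 - 0.02 * load)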

  11. Ozonisation of model compounds as a pretreatment step for the biological wastewater treatment

    International Nuclear Information System (INIS)

    Degen, U.

    1979-11-01

    Biological degradability and toxicity of organic substances are two basic criteria determining their behaviour in the natural environment and during the biological treatment of waste waters. In this work, oxidation products of model compounds (p-toluenesulfonic acid, benzenesulfonic acid and aniline) generated by ozonation were tested in a two-step laboratory plant with activated sludge. The organic oxidation products and the initial compounds were the sole source of carbon for the microbes of the adapted activated sludge. The progress of elimination of the compounds was studied by measuring DOC, COD, UV-spectra of the initial compounds and sulfate. Initial concentrations of the model compounds were 2-4 mmole/l, with 25-75% oxidation of the sulfonic acids. The following oxidation products of p-toluenesulfonic acid were identified and quantitatively measured: methylglyoxal, pyruvic acid, oxalic acid, acetic acid, formic acid and sulfate. With all the various solutions with different concentrations of initial compounds and oxidation products, the biological activity in the two-step laboratory plant could be maintained. p-Toluenesulfonic acid and the oxidation products are biologically degraded. The degradation of p-toluenesulfonic acid is measured by following the increase of the sulfate concentration after biological treatment. This shows that the elimination of p-toluenesulfonic acid is not an adsorption but a mineralization step. At high p-toluenesulfonic acid concentration and low concentration of oxidation products, p-toluenesulfonic acid is eliminated with a high efficiency (4.3 mole/(d m^3) = 0.34 kg p-toluenesulfonic acid/(d m^3)). However, at high concentration of oxidation products, p-toluenesulfonic acid is less degraded. The oxidation products are always degraded with an elimination efficiency of 70%. A high load of biologically degradable oxidation products diminishes the elimination efficiency of p-toluenesulfonic acid. (orig.)

  12. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model... applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...

  13. Generalized equivalent circuit model for ultra wideband antenna structure with double steps for energy scavenging

    International Nuclear Information System (INIS)

    Heong, Oon Kheng; Hock, Goh Chin; Chakrabarty, Chandan Kumar; Hock, Goh Tian

    2013-01-01

    There are various types of UWB antennas that can be used to scavenge energy from the air, and one of them is the printed disc monopole antenna. One of the new challenges imposed on ultra wideband is the design of a generalized antenna circuit model, which is developed in order to extract the inductance and capacitance values of UWB antennas. In this research work, the developed circuit model can be used to represent the rectangular printed disc monopole antenna with double steps. The antenna structure is simulated with CST Microwave Studio, while the circuit model is simulated with AWR Microwave Office. In order to ensure that the simulation result from the circuit model is accurate, the circuit model is also simulated using a Matlab program. The developed circuit model is found to be able to depict the actual UWB antenna. Harvesting energy wirelessly from the environment is an emerging method that forms a promising alternative to existing energy scavenging systems. The developed UWB antenna can be used to scavenge wideband energy from electromagnetic waves present in the environment.

  14. Modeling and analysis of the affinity filtration process, including broth feeding, washing, and elution steps.

    Science.gov (United States)

    He, L Z; Dong, X Y; Sun, Y

    1998-01-01

    Affinity filtration is a developing protein purification technique that combines the high selectivity of affinity chromatography and the high processing speed of membrane filtration. In this work a lumped kinetic model was developed to describe the whole affinity filtration process, including the broth feeding, contaminant washing, and elution steps. Affinity filtration experiments were conducted to evaluate the model using bovine serum albumin as a model protein and a highly substituted Blue Sepharose as an affinity adsorbent. The model, with nonadjustable parameters, agreed fairly well with the experimental results. Thus, the performance of affinity filtration in processing a crude broth containing contaminant proteins was analyzed by computer simulations using the lumped model. The simulation results show that there is an optimal protein loading for obtaining the maximum recovery yield of the desired protein with a constant purity at each operating condition. Concentration of the crude broth is beneficial in increasing the recovery yield of the desired protein. Using a constant amount of the affinity adsorbent, the recovery yield can be enhanced by decreasing the solution volume in the stirred tank due to the increase of the adsorbent weight fraction. It was found that the lumped kinetic model was simple and useful in analyzing the whole affinity filtration process.
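
    As a minimal illustration of what a lumped kinetic description of the loading step looks like, the sketch below integrates reversible Langmuir-type binding of a protein to the adsorbent in a stirred tank. The rate constants, capacity and concentrations are illustrative assumptions, not the parameters identified in the paper, and the washing and elution steps are omitted.

        # Lumped reversible binding: dc/dt = -(k_on*c*(q_max - q) - k_off*q)
        import numpy as np
        from scipy.integrate import solve_ivp

        k_on, k_off, q_max = 0.5, 0.05, 5.0   # illustrative rate constants / capacity

        def rhs(t, y):
            c, q = y                           # free and bound protein (same units)
            rate = k_on * c * (q_max - q) - k_off * q
            return [-rate, rate]

        sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0])
        bound = sol.y[1, -1]
        print("bound fraction after 60 s:", bound / (sol.y[0, -1] + bound))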

  15. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Science.gov (United States)

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose: To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method: People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results: The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls). With the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26), the test showed good predictive validity for falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation: Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  16. TWO-STEP ALGORITHM OF TRAINING INITIALIZATION FOR ACOUSTIC MODELS BASED ON DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    I. P. Medennikov

    2016-03-01

    Full Text Available This paper presents a two-step initialization algorithm for training of acoustic models based on deep neural networks. The algorithm is focused on reducing the impact of the non-speech segments on the acoustic model training. The idea of the proposed algorithm is to reduce the percentage of non-speech examples in the training set. Effectiveness evaluation of the algorithm has been carried out on the example of English spontaneous telephone speech recognition (Switchboard. The application of the proposed algorithm has led to 3% relative word error rate reduction, compared with the training initialization by restricted Boltzmann machines. The results presented in the paper can be applied in the development of automatic speech recognition systems.

  17. Effect of different air-drying time on the microleakage of single-step self-etch adhesives

    Directory of Open Access Journals (Sweden)

    Horieh Moosavi

    2013-05-01

    Full Text Available Objectives: This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods: Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB) and Bond Force (BF). Each main group was divided into three subgroups regarding the air-drying time: without application of an air stream; following the manufacturer's instructions; and for 10 sec more than the manufacturer's instructions. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and the Tukey test (α = 0.05). Results: The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instructions (p < 0.001). The microleakage of BF reached its lowest values after increasing the drying time to 10 sec more than the manufacturer's instructions (p < 0.001). Microleakage of OBAO and CSB was significantly lower compared to BF at all three drying times (p < 0.001). Conclusions: Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives caused a reduction of microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.

  18. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
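
    In symbols (notation assumed here for illustration, not taken from the paper), such a representation evaluates an outcome stream x(t) on an interval [0, T] as

        \[
          V\bigl(x(\cdot)\bigr) \;=\; \int_{0}^{T} d(t)\, u\bigl(x(t)\bigr)\, dt ,
        \]

    where d(t) is the discounting function and u is the scale defined on outcomes at instants of time; constant-rate discounting, for example, corresponds to d(t) = e^{-rho t}.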

  19. Finite element time domain modeling of controlled-Source electromagnetic data with a hybrid boundary condition

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin

    2017-01-01

    ...method which is unconditionally stable. We solve the diffusion equation for the electric field with a total field formulation. The finite element system of equations is solved using the direct method. The solutions for the electric field at different times can be obtained using the effective time stepping method with trivial computational cost once the matrix is factorized. We try to keep the same time step size for a fixed number of steps using an adaptive time step doubling (ATSD) method. The finite element modeling domain is also truncated using a semi-adaptive method. We propose a new boundary condition based on approximating the total field on the modeling boundary using the primary field corresponding to a layered background model. We validate our algorithm using several synthetic model studies.
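
    The reason a fixed step size is worth keeping for a number of steps can be seen in a small sketch. For backward Euler applied to a diffusion-type system C de/dt + K e = 0, each step solves (C/dt + K) e_{n+1} = (C/dt) e_n, so factorizing (C/dt + K) once makes every subsequent step with the same dt a cheap pair of triangular solves, and doubling dt costs only one new factorization. The matrices below are random stand-ins, not a real finite element discretization, and the scheme is a generic illustration rather than the authors' exact formulation.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        K = sp.random(n, n, density=0.01, random_state=0)
        K = K @ K.T + sp.identity(n)          # SPD stiffness-like stand-in
        C = sp.identity(n)                    # mass-like matrix
        e = np.random.default_rng(0).normal(size=n)

        dt, steps_per_level = 1e-3, 10
        for level in range(4):
            lu = spla.splu((C / dt + K).tocsc())  # one factorization per dt level
            for _ in range(steps_per_level):
                e = lu.solve(C @ e / dt)          # each step: cheap solves only
            dt *= 2.0                             # adaptive time step doubling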

  20. Multivariate modelling of the pharmaceutical two-step process of wet granulation and tableting with multiblock partial least squares

    NARCIS (Netherlands)

    Westerhuis, J.A; Coenegracht, P.M J

    1997-01-01

    The pharmaceutical process of wet granulation and tableting is described as a two-step process. Besides the process variables of both steps and the composition variables of the powder mixture, the physical properties of the intermediate granules are also used to model the crushing strength and

  1. Specification of a STEP Based Reference Model for Exchange of Robotics Models

    DEFF Research Database (Denmark)

    Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold

    The growing need for accurate information about manufacturing data (models of robots and other mechanisms) in diverse industrial applications has initiated ESPRIT Project 6457: InterRob. Besides the topics associated with standards for industrial... combining geometric, dynamic, process and robot specific data... of pilot processor programs are based. The processors allow for the exchange of product data models between Analysis systems (e.g. ADAMS), CAD systems (e.g. CATIA, BRAVO), and Simulation and off-line programming systems (e.g. GRASP, KISMET, ROPSIM).

  2. Determination of the isotopic abundance of iron and nickel with two-step laser time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Zhou Mingfei; Qin Qizong

    1996-01-01

    Two-step laser time-of-flight mass spectrometry was established to measure the isotopic abundances of iron and nickel. A pulsed Nd:YAG laser (532 nm) is used to evaporate iron and nickel atoms from a metal alloy sample. These atoms are ionized by a Nd:YAG-pumped pulsed dye laser at 300.7 nm and 303.4 nm via resonance ionization and subsequently detected by time-of-flight mass spectrometry. Arrival time distributions are obtained by varying the time delay between the evaporation laser and the ionization laser. Overlap of the pulsed atomic beam and the ionization laser beam is excellent, with an effective duty cycle of about 1%, so the sensitivity of isotopic detection is increased substantially using resonance ionization time-of-flight mass spectrometry. A one-color two-photon (1 + 1) ionization scheme is employed for the Fe and Ni atoms. The results show that all the measured isotopic abundances of Fe and Ni are in agreement with values published in the literature.

  3. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  4. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm which addresses all the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.

  5. Recruitment to a university alcohol program: evaluation of social marketing theory and stepped approach model.

    Science.gov (United States)

    Gries, J A; Black, D R; Coster, D C

    1995-07-01

    This study was a first initiative to evaluate the application of social marketing theory (SMT) to increase attendance at an alcohol abuse education program for university residence hall students and to ascertain whether aggressive recruitment strategies are necessary as part of the stepped approach model (SAM) of service delivery. SMT and public health strategies that include focus groups, in-depth interviews, and intercept interviews were used to develop recruitment materials in a Test Hall. These new recruitment materials were introduced to the residents in the Treatment Hall (N = 727) and were compared to the Usual Care, Control Hall (N = 706), which received the recruitment materials normally provided to residents, as well as to three Historical Halls, separately and combined, which had used the Usual Care recruitment materials in the past. The Treatment Hall percentage attendance was significantly superior (p < 0.001), exceeding marketing literature expectations. The projections for campus-wide attendance for residence hall students were between 207 and 243 participants and, for nationwide attendance, 36,900 +/- 8,185. The results suggest that the SMT and public health methods used are helpful in developing recruitment strategies and are an important initial step of the SAM, and that a "minimal intervention" recruitment strategy is a cost-effective approach that can have a dramatic impact.

  6. Modeling of X-ray Images and Energy Spectra Produced by Stepping Lightning Leaders

    Science.gov (United States)

    Xu, Wei; Marshall, Robert A.; Celestin, Sebastien; Pasko, Victor P.

    2017-11-01

    Recent ground-based measurements at the International Center for Lightning Research and Testing (ICLRT) have greatly improved our knowledge of the energetics, fluence, and evolution of X-ray emissions during natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, using Monte Carlo simulations and the response matrix of unshielded detectors in the Thunderstorm Energetic Radiation Array (TERA), we calculate the energy spectra of X-rays as would be detected by TERA and directly compare with the observational data during event MSE 10-01. The good agreement obtained between TERA measurements and theoretical calculations supports the mechanism of X-ray production by thermal runaway electrons during the negative corona flash stage of stepping lightning leaders. Modeling results also suggest that measurements of X-ray bursts can be used to estimate the approximate range of potential drop of lightning leaders. Moreover, the X-ray images produced during the leader stepping process in natural negative CG discharges, including both the evolution and morphological features, are theoretically quantified. We show that the compact emission pattern as recently observed in X-ray images is likely produced by X-rays originating from the source region, and the diffuse emission pattern can be explained by the Compton scattering effects.

  7. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

    Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson, Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade...
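
    To make the comparison concrete, here are two of the simplest surveyed functions side by side: the Exponential model d(t) = e^(-kt) and the Hyperbolic model d(t) = 1/(1 + kt). The rate parameter below is arbitrary.

        import numpy as np

        def exponential(t, k):
            return np.exp(-k * t)

        def hyperbolic(t, k):
            return 1.0 / (1.0 + k * t)

        t = np.array([0.0, 1.0, 5.0, 10.0, 50.0])   # delays
        for name, d in (("exponential", exponential), ("hyperbolic", hyperbolic)):
            print(name, np.round(d(t, k=0.1), 3))

    The hyperbolic curve falls faster at short delays but more slowly at long ones, which is the property behind the preference reversals that motivate much of this literature.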

  8. Allergy-inducing nickel concentration is lowered by lipopolysaccharide at both the sensitization and elicitation steps in a murine model.

    Science.gov (United States)

    Kinbara, M; Sato, N; Kuroishi, T; Takano-Yamamoto, T; Sugawara, S; Endo, Y

    2011-02-01

    Nickel (Ni) is the major cause of contact allergy. We previously found that lipopolysaccharide (LPS, a cell-surface component of gram-negative bacteria) markedly promotes Ni allergy in a murine model. Establishing the minimum concentration or amount of Ni needed to induce allergic responses may help us to prevent or reduce such responses. Using the above murine model, we examined the influence of LPS on the minimum allergy-inducing concentrations of Ni (Ni-MAICs) at the sensitization step and at the elicitation step. BALB/c mice were sensitized by intraperitoneal injection of a mixture containing various concentrations of LPS and NiCl(2). Ten days later, their ear pinnas were challenged intradermally with a mixture containing various concentrations of LPS and NiCl(2), and ear swelling was measured. Without LPS, the Ni-MAICs at the sensitization and elicitation steps were around 1×10(-2) mol L(-1) and 1×10(-5) mol L(-1), respectively. Sensitization with NiCl(2) + LPS did not alter the value at elicitation. Surprisingly, LPS markedly reduced these Ni-MAICs (to around 1×10(-6) mol L(-1) at sensitization, with 25 μg mL(-1) LPS, and 1×10(-12) mol L(-1) at elicitation, with 0.5 μg mL(-1) LPS). The effect of LPS depended on its concentration and the timing of its injection. Our findings suggest that: (i) Ni-MAIC is higher at sensitization than at elicitation; (ii) once sensitization is established, Ni allergy can easily be induced by a low concentration of Ni; and (iii) a bacterial milieu or infection may greatly facilitate the establishment and elicitation of Ni allergy. © 2010 The Authors. BJD © 2010 British Association of Dermatologists.

  9. Long-Time Plasma Membrane Imaging Based on a Two-Step Synergistic Cell Surface Modification Strategy.

    Science.gov (United States)

    Jia, Hao-Ran; Wang, Hong-Yin; Yu, Zhi-Wu; Chen, Zhan; Wu, Fu-Gen

    2016-03-16

    Long-time stable plasma membrane imaging is difficult due to the fast cellular internalization of fluorescent dyes and the quick detachment of the dyes from the membrane. In this study, we developed a two-step synergistic cell surface modification and labeling strategy to realize long-time plasma membrane imaging. Initially, a multisite plasma membrane anchoring reagent, glycol chitosan-10% PEG2000 cholesterol-10% biotin (abbreviated as "GC-Chol-Biotin"), was incubated with cells to modify the plasma membranes with biotin groups with the assistance of the membrane anchoring ability of cholesterol moieties. Fluorescein isothiocyanate (FITC)-conjugated avidin was then introduced to achieve the fluorescence-labeled plasma membranes based on the supramolecular recognition between biotin and avidin. This strategy achieved stable plasma membrane imaging for up to 8 h without substantial internalization of the dyes, and avoided the quick fluorescence loss caused by the detachment of dyes from plasma membranes. We have also demonstrated that the imaging performance of our staining strategy far surpassed that of current commercial plasma membrane imaging reagents such as DiD and CellMask. Furthermore, the photodynamic damage of plasma membranes caused by a photosensitizer, Chlorin e6 (Ce6), was tracked in real time for 5 h during continuous laser irradiation. Plasma membrane behaviors including cell shrinkage, membrane blebbing, and plasma membrane vesiculation could be dynamically recorded. Therefore, the imaging strategy developed in this work may provide a novel platform to investigate plasma membrane behaviors over a relatively long time period.

  10. Compact Two-step Laser Time-of-Flight Mass Spectrometer for in Situ Analyses of Aromatic Organics on Planetary Missions

    Science.gov (United States)

    Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa

    2012-01-01

    RATIONALE: A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional single-step laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS: A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon, representing a class of compounds important to the fields of Earth and planetary science. RESULTS: L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. Mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS: Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.

  11. Analytic observations for the d=1+1 bridge site (or single-step) deposition model

    International Nuclear Information System (INIS)

    Evans, J.W.; Kang, H.C.

    1991-01-01

    Some exact results for a reversible version of the d=1+1 bridge site (or single-step) deposition model are presented. Exact steady-state properties are determined directly for finite systems with various mean slopes. These show explicitly how the asymptotic growth velocity and fluctuations are quenched as the slope approaches its maximum allowed value. Next, exact hierarchical equations for the dynamics are presented. For the special case of "equilibrium growth," these are analyzed exactly at the pair-correlation level directly for an infinite system. This provides further insight into asymptotic scaling behavior. Finally, the above hierarchy is compared with one generated from a discrete form of the Kardar-Parisi-Zhang equations. Some differences are described.
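
    The single-step dynamics are easy to state algorithmically: neighbouring heights differ by exactly ±1, deposition raises a local minimum by 2, and the reversible version also allows a local maximum to evaporate by 2. The Monte Carlo sketch below illustrates those rules (system size, rates and run length are arbitrary); it simulates the model rather than reproducing the paper's exact analytic hierarchy.

        import numpy as np

        rng = np.random.default_rng(1)
        L = 100
        h = np.zeros(L, dtype=int)
        h[1::2] = 1                  # alternating heights: a valid single-step state

        p_evap = 0.2                 # 0 recovers the irreversible growth model
        for _ in range(100_000):
            i = rng.integers(L)
            left, right = h[(i - 1) % L], h[(i + 1) % L]
            if h[i] < left and h[i] < right:                      # local minimum
                h[i] += 2                                         # deposit
            elif h[i] > left and h[i] > right and rng.random() < p_evap:
                h[i] -= 2                                         # evaporate
        print("mean height:", h.mean(), "interface width:", h.std())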

  12. UOE Pipe Numerical Model: Manufacturing Process And Von Mises Residual Stresses Resulted After Each Technological Step

    Science.gov (United States)

    Delistoian, Dmitri; Chirchor, Mihael

    2017-12-01

    Fluids are transported from production areas to the final customer by pipelines. For the oil and gas industry, pipeline safety and reliability are a priority. For this reason, pipe quality assurance directly influences the designed pipeline life and, above all, protects the environment. A significant number of longitudinally welded pipes for onshore/offshore pipelines are manufactured by the UOE method, which is based on cold forming. In the present study, the UOE pipe manufacturing process is modeled using the finite element method, and the von Mises stresses are obtained for each step. The numerical simulation is performed for an L415 MB (X60) steel plate with 7.9 mm thickness, 30 mm length and 1250 mm width; the result is a DN 400 pipe.

  13. A Three Step B2B Sales Model Based on Satisfaction Judgments

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe

    2015-01-01

    ...companies' perspective. The buying center members applied satisfaction dimensions when forming satisfaction judgments. Moreover, the focus and importance of the identified satisfaction dimensions fluctuated depending on the phase of the buying process. Based on the findings, a three-step sales model is proposed, comprising: 1. identification of the satisfaction dimensions the buying center members apply in the buying process; 2. identification of the fluctuation in importance of the satisfaction dimensions; and 3. identification of the degree of expectations adjacent to the identified satisfaction... The insights produced can be applied by selling companies to craft close collaborative customer relationships in a systematic and efficient way. The process of building customer relationships will be guided through actions that yield higher satisfaction judgments, leading to loyal customers and finally...

  15. A model of hydrogen impact induced chemical erosion of carbon based on elementary reaction steps

    International Nuclear Information System (INIS)

    Wittmann, M.; Kueppers, J.

    1996-01-01

    Based on the elementary reaction steps for chemical erosion of carbon by hydrogen, a model is developed which allows calculation of the amount of carbon erosion at a hydrogenated carbon surface under the impact of hydrogen ions and neutrals. Hydrogen ion and neutral flux energy distributions prevailing at target plates in the ASDEX Upgrade experiment are chosen for the present calculation. The range of hydrogen particles in the target plates is calculated using the TRIDYN code. Based upon the TRIDYN results, the extent of the erosion reaction as a function of depth is estimated. The results show that both the target temperature and the impinging particle flux energy distribution determine the hydrogen flux density dependent erosion yield and the location of the erosion below the surface. (orig.)

  16. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian Søndergaard

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world...... are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  17. Integration of FULLSWOF2D and PeanoClaw: Adaptivity and Local Time-Stepping for Complex Overland Flows

    KAUST Repository

    Unterweger, K.

    2015-01-01

    © Springer International Publishing Switzerland 2015. We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—is our software of choice here, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.

  18. Prevention of Post-herpetic Neuralgia from Dream to Reality: A Ten-step Model.

    Science.gov (United States)

    Makharita, Mohamed Younis

    2017-02-01

    Herpes zoster (HZ) is a painful, blistering skin eruption in a dermatomal distribution caused by reactivation of a latent varicella zoster virus in the dorsal root ganglia (DRG). Post-herpetic neuralgia (PHN) is the most common complication of acute herpes zoster (AHZ). Severe prodrome, greater acute pain and dermatomal injury, and the density of the eruption are the risk factors and predictors for developing PHN. PHN has a substantial effect on the quality of life; many patients develop severe physical, occupational, social, and psychosocial disabilities as a result of the unceasing pain. The long-term suffering and the limited efficacy of the currently available medications can lead to drug dependency, hopelessness, depression, and even suicide. Family and society are also affected regarding cost and lost productivity. The pathophysiology of PHN remains unclear. Viral reactivation in the dorsal root ganglion and its spread through the affected nerve result in severe ganglionitis and neuritis, which induce a profound sympathetic stimulation and vasoconstriction of the endoneural arterioles, which decreases the blood flow in the intraneural capillary bed resulting in nerve ischemia. Our rationale is based on previous studies which have postulated that the early interventions could reduce repetitive painful stimuli and prevent vasospasm of the endoneural arterioles during the acute phase of HZ. Hence, they might attenuate the central sensitization, prevent the ischemic nerve damage, and finally account for PHN prevention. The author introduces a new Ten-step Model for the prevention of PHN. The idea of this newly suggested approach is to increase the awareness of the health care team and the community about the nature of HZ and its complications, especially in the high-risk groups. Besides, it emphasizes the importance of the prompt antiviral therapy and the early sympathetic blockades for preventing PHN. Key words: Acute herpes zoster, prevention, post...

  19. Objective assessment of physical activity and sedentary behaviour in knee osteoarthritis patients - beyond daily steps and total sedentary time.

    Science.gov (United States)

    Sliepen, Maik; Mauricio, Elsa; Lipperts, Matthijs; Grimm, Bernd; Rosenbaum, Dieter

    2018-02-23

    Knee osteoarthritis patients may become physically inactive due to pain and functional limitations. Whether physical activity exerts a protective or harmful effect depends on its frequency, intensity, time and type (F.I.T.T.). The F.I.T.T. dimensions should therefore be assessed during daily life, which so far has hardly been feasible. Furthermore, physical activity should be assessed within subgroups of patients, as they might experience different activity limitations. Therefore, this study aimed to objectively describe the physical activity, by assessing the F.I.T.T. dimensions, and the sedentary behaviour of knee osteoarthritis (KOA) patients during daily life. An additional goal was to determine whether activity events, based on different types and durations of physical activity, were able to discriminate between subgroups of KOA patients defined by risk factors. Clinically diagnosed knee osteoarthritis patients (according to American College of Rheumatology criteria) were monitored for 1 week with a tri-axial accelerometer. Furthermore, they performed three functional tests and completed the Knee Osteoarthritis Outcome Score. Physical activity levels were described for knee osteoarthritis patients and compared between subgroups. The 61 patients performed a daily mean of 7303 level steps, 319 ascending and 312 descending steps, and 601 bicycle crank revolutions. Most waking hours were spent sedentary (61%), with 4.6 bouts of long duration (> 30 min). Specific events, particularly ascending and descending stairs/slopes, brief walking and sedentary bouts, and prolonged walking bouts, varied between subgroups. In this sample of KOA patients, the most common form of activity was level walking, although cycling and stair climbing occurred frequently, highlighting the relevance of distinguishing between these types of PA. The total active time encompassed a small portion of their waking hours, as they spent most of their time sedentary, which was exacerbated by

  20. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time-invariant (LTI) systems is proposed in this paper. Both continuous- and discrete-time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently developed inner-outer factorization technique. Compared to analogous counterparts, the proposed method is shown to provide more accurate results in terms of time-weighted norms when applied to different practical examples. The results are further illustrated by a numerical example.

  1. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may fail intermittently. We propose two families of solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series, with missing values generated by applying a random process to the original (complete) time series, is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
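
    For readers unfamiliar with the underlying machinery, the sketch below shows a time-delayed phase-space reconstruction followed by a local prediction from dynamical neighbours. The embedding dimension, delay, neighbour count and synthetic series are illustrative placeholders, not the settings used for the Hoek van Holland surge data.

```python
# Minimal time-delay embedding and nearest-neighbour one-step prediction
# (hypothetical parameters; the series is a noisy sine, not surge data).
import numpy as np

def embed(x, dim=3, tau=2):
    """Reconstruct the time-delayed phase space of series x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(500)) + 0.05 * rng.standard_normal(500)

dim, tau = 3, 2
X = embed(x, dim, tau)            # states (x[i], x[i+tau], x[i+2*tau])
horizon = (dim - 1) * tau + 1     # index offset of the one-step target
y = x[horizon:]                   # y[i] is the value following state X[i]
X = X[: len(y)]

query = X[-1]                                  # last known state
dists = np.linalg.norm(X[:-1] - query, axis=1)
neighbours = np.argsort(dists)[:5]             # 5 dynamical neighbours
prediction = y[neighbours].mean()              # local-model forecast
print("predicted next value:", prediction)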

  2. Alcoholics Anonymous and twelve-step recovery: a model based on social and cognitive neuroscience.

    Science.gov (United States)

    Galanter, Marc

    2014-01-01

    In the course of achieving abstinence from alcohol, longstanding members of Alcoholics Anonymous (AA) typically experience a change in their addiction-related attitudes and behaviors. These changes are reflective of physiologically grounded mechanisms which can be investigated within the disciplines of social and cognitive neuroscience. This article is designed to examine recent findings associated with these disciplines that may shed light on the mechanisms underlying this change. Literature review and hypothesis development. Pertinent aspects of the neural impact of drugs of abuse are summarized. After this, research regarding specific brain sites, elucidated primarily by imaging techniques, is reviewed relative to the following: Mirroring and mentalizing are described in relation to experimentally modeled studies on empathy and mutuality, which may parallel the experiences of social interaction and influence on AA members. Integration and retrieval of memories acquired in a setting like AA are described, and are related to studies on storytelling, models of self-schema development, and value formation. A model for ascription to a Higher Power is presented. The phenomena associated with AA reflect greater complexity than the empirical studies on which this article is based, and certainly require further elucidation. Despite this substantial limitation in currently available findings, there is heuristic value in considering the relationship between the brain-based and clinical phenomena described here. There are opportunities for the study of neuroscientific correlates of Twelve-Step-based recovery, and these can potentially enhance our understanding of related clinical phenomena. © American Academy of Addiction Psychiatry.

  3. Heavy mesons in a simple quark-confining two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaushal, R.S.

    1980-10-01

    We study the mass spectra and decay widths of upsilon resonances in a simple quark-confining, analytically solvable, two-step potential model used earlier to study the charmonium system and even light mesons like π, ρ, K, etc. Results are found to be in good agreement with experiments and also with the values predicted by others. We also calculate within our model the masses of the lowest-lying bottom mesons, which we denote by B(π) or B, B(ρ) or B*, B(K) or B_s, B(K*) or B_s*, and B(ψ) or B_c, showing that these agree well with other theoretical predictions. In this way we put the BB̄ threshold at 10.242 GeV, which means that in our model the first three radial upsilon excitations, viz. Y(9.4345), Y'(9.9930) and Y''(10.1988), are stable with respect to the Zweig-allowed decay into BB̄. (author)

  4. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient, model-integration-free iterative procedure on the update step of the parameters only, for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to the Dual-EnKF, but requires twice more model
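
    For orientation, the sketch below implements a generic stochastic EnKF analysis step with perturbed observations; this is the textbook update that the Joint- and Dual-EnKF variants build on, not the authors' one-step-ahead smoothing or iterative scheme. Dimensions and values are illustrative.

```python
# Generic stochastic EnKF analysis step: each ensemble member is updated
# with a perturbed observation (textbook form, assumed linear observation H).
import numpy as np

def enkf_update(ensemble, H, y, obs_var, rng):
    """ensemble: (n_members, n_state); H: (n_obs, n_state); y: (n_obs,)."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    S = X @ H.T                                    # predicted-obs anomalies
    P_yy = S.T @ S / (n - 1) + obs_var * np.eye(len(y))
    K = (X.T @ S / (n - 1)) @ np.linalg.inv(P_yy)  # Kalman gain
    y_pert = y + rng.normal(0, np.sqrt(obs_var), (n, len(y)))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, (50, 4))                # 50 members, 4 state vars
H = np.array([[1.0, 0.0, 0.0, 0.0]])               # observe first variable
ens = enkf_update(ens, H, np.array([0.5]), 0.1, rng)
print("posterior mean:", ens.mean(axis=0))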

  5. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, whereas model-based approaches in general present serious drawbacks on this point.

  7. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This makes it possible to formulate complex measures, involving expected as well as accumulated rewards, in a precise and

  8. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits, giving rise to panel data. Under this observation scheme the exact times of disease state transitions and the sequence of disease states visited are unknown, and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time-transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare the performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
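
    The time-transformation idea can be made concrete with a small simulation: a process that is homogeneous on an operational time scale s = h(t) is nonhomogeneous in calendar time t, with rates scaled by h'(t). The sketch below assumes a power-law transformation h(t) = t**a and a two-state generator; it illustrates the model class only, not the authors' Fisher-scoring estimation.

```python
# Simulate a nonhomogeneous two-state Markov process by drawing exponential
# sojourns on the operational scale s = t**a and mapping back to calendar time.
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[-0.5, 0.5],          # generator on the operational scale
              [ 0.2, -0.2]])
a = 1.5                              # assumed h(t) = t**a, rates grow with t

def simulate(t_max, state=0):
    t, path = 0.0, [(0.0, 0)]
    while t < t_max:
        rate = -Q[state, state]
        s = t ** a + rng.exponential(1.0 / rate)   # wait in operational time
        t = s ** (1.0 / a)                         # map back to calendar time
        probs = Q[state].copy()
        probs[state] = 0.0
        probs /= rate                              # jump distribution
        state = rng.choice(len(Q), p=probs)
        path.append((t, state))
    return path

print(simulate(10.0)[:5])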

  9. Timed Model Checking of Security Protocols

    NARCIS (Netherlands)

    Corin, R.J.; Etalle, Sandro; Hartel, Pieter H.; Mader, Angelika H.

    We propose a method for engineering security protocols that are aware of timing aspects. We study a simplified version of the well-known Needham Schroeder protocol and the complete Yahalom protocol. Timing information allows the study of different attack scenarios. We illustrate the attacks by model

  10. Stepping Stone Mobility.

    OpenAIRE

    Jovanovic, B.; Nyarko, Y.

    1996-01-01

    People at the top of an occupational ladder earn more partly because they have spent time on lower rungs, where they have learned something. But what precisely do they learn? There are two contrasting views: First, the Bandit model assumes that people are different, that experience reveals their characteristics, and that consequently an occupational switch can result. Second, in our Stepping Stone model, experience raises a worker's productivity on a given task and the acquired skill can in p...

  11. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...

  12. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    Science.gov (United States)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.

  13. Modeling preference time in middle distance triathlons

    OpenAIRE

    Fister, Iztok; Iglesias, Andres; Deb, Suash; Fister, Dušan; Fister Jr, Iztok

    2017-01-01

    Modeling preference time in triathlons means predicting the intermediate times of particular sports disciplines from a given overall finish time in a specific triathlon course for an athlete with a known personal best result. This is a hard task for athletes and sports trainers because of the many factors that need to be taken into account, e.g., the athlete's abilities, health, mental preparation and even current sporting form. So far, this process was carried out manually without any ...

  14. The analogue method for precipitation prediction: finding better analogue situations at a sub-daily time step

    Science.gov (United States)

    Horton, Pascal; Obled, Charles; Jaboyedoff, Michel

    2017-07-01

    Analogue methods (AMs) predict local weather variables (predictands) such as precipitation by means of a statistical relationship with predictors at a synoptic scale. The analogy is generally assessed on gradients of geopotential heights first to sample days with a similar atmospheric circulation. Other predictors such as moisture variables can also be added in a successive level of analogy. The search for candidate situations similar to a given target day is usually undertaken by comparing the state of the atmosphere at fixed hours of the day for both the target day and the candidate analogues. This is a consequence of using standard daily precipitation time series, which are available over longer periods than sub-daily data. However, it is unlikely for the best analogy to occur at the exact same hour for the target and candidate situations. A better analogue situation may be found with a time shift of several hours since a better fit can occur at different times of the day. In order to assess the potential for finding better analogues at a different hour, a moving time window (MTW) has been introduced. The MTW resulted in a better analogy in terms of the atmospheric circulation and showed improved values of the analogy criterion on the entire distribution of the extracted analogue dates. The improvement was found to increase with the analogue rank due to an accumulation of better analogues in the selection. A seasonal effect has also been identified, with larger improvements shown in winter than in summer. This may be attributed to stronger diurnal cycles in summer that favour predictors taken at the same hour for the target and analogue days. The impact of the MTW on the precipitation prediction skill has been assessed by means of a sub-daily precipitation series transformed into moving 24 h totals at 12, 6, and 3 h time steps. The prediction skill was improved by the MTW, as was the reliability of the prediction. Moreover, the improvements were greater for days
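
    A minimal rendering of the MTW idea is sketched below: for each candidate day, the analogy criterion is evaluated over a small set of time shifts and the best-fitting shift is retained. The synthetic 6-hourly fields, the RMSE criterion and the ±12 h window are placeholder assumptions, not the study's predictors or settings.

```python
# Toy moving time window (MTW) analogue search over synthetic predictor
# fields sampled 4 times per day (6-hourly); all values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
archive = rng.standard_normal((365 * 4, 20, 20))   # one year of fields
target = archive[1000] + 0.1 * rng.standard_normal((20, 20))

def criterion(a, b):
    return np.sqrt(np.mean((a - b) ** 2))          # RMSE over the field

def best_analogue(day_index, shifts=range(-2, 3)):
    """Best criterion over time shifts of +/- 2 steps (= +/- 12 h)."""
    base = day_index * 4
    scores = [criterion(target, archive[base + s]) for s in shifts
              if 0 <= base + s < len(archive)]
    return min(scores)

scores = [best_analogue(d) for d in range(1, 364)]
print("best analogue score over the archive:", min(scores))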

  15. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
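
    As a taste of these techniques, the sketch below implements a Karplus-Strong plucked string, the simplest relative of the digital waveguide string models mentioned above: a delay line whose length sets the pitch, with a two-point averaging filter in the feedback loop modelling frequency-dependent loss. The sample rate, pitch and loss factor are arbitrary choices.

```python
# Minimal Karplus-Strong plucked string: a noise-filled delay line with a
# lossy averaging filter in the feedback path (illustrative parameters).
import numpy as np

fs = 44100                      # sample rate (Hz)
f0 = 220.0                      # fundamental frequency (Hz)
N = int(fs / f0)                # delay-line length sets the pitch
rng = np.random.default_rng(4)
delay = rng.uniform(-1, 1, N)   # noise burst acts as the "pluck"

out = np.empty(fs)              # one second of synthesized sound
for n in range(fs):
    out[n] = delay[n % N]
    # averaging two successive samples models frequency-dependent loss
    delay[n % N] = 0.996 * 0.5 * (delay[n % N] + delay[(n + 1) % N])

print("peak amplitude after 1 s:", np.abs(out[-N:]).max())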

  16. 2-D edge modelling: convergence and results for next step devices in the high recycling regime

    International Nuclear Information System (INIS)

    Pacher, H.D.; D'haeseleer, W.D.; Pacher, G.W.

    1992-01-01

    In the present work, we apply the Braams B-2 code to a next step device, with particular emphasis on convergence of the solutions and on the scaling of the peak power load per unit area on the divertor plate, f_p, and of the electron temperature T_e,p at the point of peak power load. A double null geometry is chosen (ITER as defined, 22 MA, 6 m). The input power P to one outer divertor is 0.4 of the total power to the SOL and 0.05 of the fusion power, and f_p is given without safety and peaking factors. Recycling is treated by an analytical model including atoms and molecules. The model is appropriate for the high recycling regime down to T_e,p ≈ 5-10 eV as long as impurity radiation can be neglected, but not beyond, since sideways neutral motion is not included. DT ionization and radiation losses are included. Only D-T ions are treated, but collision frequencies are corrected for impurities. Radial transport coefficients are uniform in space, with χ_e = 2 m²/s and D = χ_i = χ_e/3 = const for most of the cases, but Bohm-like scaling is also investigated. (author) 10 refs., 7 figs

  17. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modified Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.

  18. Modeling nonlinear time-dependent treatment effects: an application of the generalized time-varying effect model (TVEM).

    Science.gov (United States)

    Shiyko, Mariya P; Burkhalter, Jack; Li, Runze; Park, Bernard J

    2014-10-01

    The goal of this article is to introduce to social and behavioral scientists the generalized time-varying effect model (TVEM), a semiparametric approach for investigating time-varying effects of a treatment. The method is best suited for data collected intensively over time (e.g., experience sampling or ecological momentary assessments) and addresses questions pertaining to effects of treatment changing dynamically with time. Thus, of interest is the description of timing, magnitude, and (nonlinear) patterns of the effect. Our presentation focuses on practical aspects of the model. A step-by-step demonstration is presented in the context of an empirical study designed to evaluate effects of surgical treatment on quality of life among early stage lung cancer patients during posthospitalization recovery (N = 59; 61% female, M age = 66.1 years). Frequency and level of distress associated with physical symptoms were assessed twice daily over a 2-week period, providing a total of 1,544 momentary assessments. Traditional analyses (analysis of covariance [ANCOVA], repeated-measures ANCOVA, and multilevel modeling) yielded findings of no group differences. In contrast, generalized TVEM identified a pattern of the effect that varied in time and magnitude. Group differences manifested after Day 4. Generalized TVEM is a flexible statistical approach that offers insight into the complexity of treatment effects and allows modeling of nonnormal outcomes. The practical demonstration, shared syntax, and availability of a free set of macros aim to encourage researchers to apply TVEM to complex data and stimulate important scientific discoveries. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  19. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Science.gov (United States)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and the submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the optimised model implemented, the simulated soot onset and transport phenomena before reaching the quasi-steady state agree reasonably well with the experimental observations. Also, the variation of the spatial soot distribution and of the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low and high density conditions, is reproduced.

  20. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…

  1. Multi-Step Usage of in Vivo Models During Rational Drug Design and Discovery

    Directory of Open Access Journals (Sweden)

    Charles H. Williams

    2011-04-01

    In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it remains a herculean challenge requiring exorbitant resources, and it often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems-biology-based approach, paired with the power of computational biology, genetics and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design.

  2. A Two-Step Hybrid Approach for Modeling the Nonlinear Dynamic Response of Piezoelectric Energy Harvesters

    Directory of Open Access Journals (Sweden)

    Claudio Maruccio

    2018-01-01

    An effective hybrid computational framework is described here in order to assess the nonlinear dynamic response of piezoelectric energy harvesting devices. The proposed strategy consists of two steps. First, fully coupled multiphysics finite element (FE) analyses are performed to evaluate the nonlinear static response of the device. An enhanced reduced-order model is then derived, where the global dynamic response is formulated in the state space using lumped coefficients enriched with the information derived from the FE simulations. The electromechanical response of piezoelectric beams under forced vibrations is studied by means of the proposed approach, which is also validated by comparing numerical predictions with experimental results. These numerical and experimental investigations have been carried out with the main aim of studying the influence of material and geometrical parameters on the global nonlinear response. The advantage of the presented approach is that the overall computational and experimental effort is significantly reduced while preserving satisfactory accuracy in the assessment of the global behavior.

  3. The Palliser Rockslide, Canadian Rocky Mountains: Characterization and modeling of a stepped failure surface

    Science.gov (United States)

    Sturzenegger, M.; Stead, D.

    2012-02-01

    This paper presents the results of an investigation of the prehistoric Palliser Rockslide, Rocky Mountains, Canada. Conventional aerial photograph interpretation and field mapping are complemented by terrestrial digital photogrammetry. These techniques allow quantification of the rockslide debris volume and reconstruction of the pre-slide topography. It has been estimated that the volume of rock involved in the most recent large rockslide is 8 Mm³. Terrestrial digital photogrammetry is used in the characterization of the failure surface morphology, which is subdivided into four types of step-path geometry comprising both pre-existing discontinuities and intact rock fractures. Incorporation of these data into various rock slope stability numerical modeling methods highlights a complex failure mechanism, which includes sliding along a large-scale curved failure surface, intact rock bridge fracturing and lateral confinement. A preliminary quantification of the contribution of intact rock bridges to the shear strength of the failure surface is presented in terms of the apparent cohesion, apparent tensile strength and cumulative length of the intact rock segments.

  4. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  5. Quantitative one-step real-time RT-PCR for the fast detection of the four genotypes of PPRV.

    Science.gov (United States)

    Kwiatek, Olivier; Keita, Djénéba; Gil, Patricia; Fernández-Pinero, Jovita; Jimenez Clavero, Miguel Angel; Albina, Emmanuel; Libeau, Genevieve

    2010-05-01

    A one-step real-time Taqman RT-PCR assay (RRT-PCR) for peste des petits ruminants virus (PPRV) was developed to detect the four lineages of PPRV by targeting the nucleoprotein (N) gene of the virus. This new assay was compared to a conventional RT-PCR on reference strains and field materials. Quantitation was performed against a standard based on a synthetic transcript of the NPPR gene for which a minimum of 32 copies per reaction were detected with a corresponding C(t) value of 39. Depending on the lineage involved, the detection limit of RRT-PCR was decreased by one to three log copies relative to the conventional method. The lower stringency occurred with lineage III because of minor nucleotide mismatches within the probe region. The assay did not detect phylogenetically or symptomatically related viruses of ruminants (such as rinderpest, bluetongue, and bovine viral diarrhea viruses). However, it was capable of detecting 20% more positive field samples with low viral RNA loads compared to the conventional PCR method. When compared on a proficiency panel to the method developed by Bao et al. (2008), the sensitivity of the in-house assay was slightly improved on lineage II. It proved significantly faster to perform and hence better adapted for monitoring large numbers of at risk or diseased animals. Copyright 2010 Elsevier B.V. All rights reserved.

  6. A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.

    Science.gov (United States)

    Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M

    2014-01-01

    Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified, although their clinical significance remains unknown. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, the USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel, straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus; subsequently, only the positive samples undergo a real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, and is thus suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Potentials and Limitations of Real-Time Elastography for Prostate Cancer Detection: A Whole-Mount Step Section Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Junker

    2012-01-01

    Objectives. To evaluate prostate cancer (PCa) detection rates of real-time elastography (RTE) in dependence of tumor size, tumor volume, localization and histological type. Materials and Methods. Thirty-nine patients with biopsy-proven PCa underwent RTE before radical prostatectomy (RPE) to assess prostate tissue elasticity; hard lesions were considered suspicious for PCa. After RPE, the prostates were prepared as whole-mount step sections and compared with the imaging findings to analyze PCa detection rates. Results. RTE detected 6/62 cancer lesions with a maximum diameter of 0-5 mm (9.7%), 10/37 with a maximum diameter of 6-10 mm (27%), 24/34 with a maximum diameter of 11-20 mm (70.6%), 14/14 with a maximum diameter of >20 mm (100%) and 40/48 with a volume ≥0.2 cm³ (83.3%). Regarding cancer lesions with a volume ≥0.2 cm³, there was a significant difference in PCa detection rates between Gleason scores with predominant Gleason pattern 3 and those with predominant Gleason pattern 4 or 5 (75% versus 100%; P=0.028). Conclusions. RTE is able to detect PCa of significant tumor volume and of predominant Gleason pattern 4 or 5 with high confidence, but is of limited value in the detection of small cancer lesions.

  8. Identification of Dobrava, Hantaan, Seoul, and Puumala viruses by one-step real-time RT-PCR.

    Science.gov (United States)

    Aitichou, Mohamed; Saleh, Sharron S; McElroy, Anita K; Schmaljohn, C; Ibrahim, M Sofi

    2005-03-01

    We developed four assays for specifically identifying Dobrava (DOB), Hantaan (HTN), Puumala (PUU), and Seoul (SEO) viruses. The assays are based on the real-time one-step reverse transcriptase polymerase chain reaction (RT-PCR) with the small segment used as the target sequence. The detection limits of DOB, HTN, PUU, and SEO assays were 25, 25, 25, and 12.5 plaque-forming units, respectively. The assays were evaluated in blinded experiments, each with 100 samples that contained Andes, Black Creek Canal, Crimean-Congo hemorrhagic fever, Rift Valley fever and Sin Nombre viruses in addition to DOB, HTN, PUU and SEO viruses. The sensitivity levels of the DOB, HTN, PUU, and SEO assays were 98%, 96%, 92% and 94%, respectively. The specificity of DOB, HTN and SEO assays was 100% and the specificity of the PUU assay was 98%. Because of the high levels of sensitivity, specificity, and reproducibility, we believe that these assays can be useful for diagnosing and differentiating these four Old-World hantaviruses.

  9. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    Science.gov (United States)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re ≲ 189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to

  10. Time dependent viscous string cloud cosmological models

    Science.gov (United States)

    Tripathy, S. K.; Nayak, S. K.; Sahu, S. K.; Routray, T. R.

    2009-09-01

    Bianchi type-I string cosmological models are studied in the Saez-Ballester theory of gravitation when the source for the energy momentum tensor is a viscous string cloud coupled to the gravitational field. The bulk viscosity is assumed to vary with time and is related to the scalar expansion. The relationships between the proper energy density ρ and the string tension density λ are investigated for two different cosmological models.

  11. On the two steps threshold selection for over-threshold modelling of extreme events

    Science.gov (United States)

    Bernardara, Pietro; Mazas, Franck; Weiss, Jerome; Andreewsky, Marc; Kergadallan, Xavier; Benoit, Michel; Hamm, Luc

    2013-04-01

    The estimation of the probability of occurrence of extreme events is traditionally achieved by fitting a probability distribution to a sample of extreme observations. In particular, the extreme value theory (EVT) states that values exceeding a given threshold converge to a Generalized Pareto Distribution (GPD) if the original sample is composed of independent and identically distributed values. However, the temporal series of sea and ocean variables usually show strong temporal autocorrelation. Traditionally, in order to select independent events for the subsequent statistical analysis, the concept of a physical threshold is introduced: events that exceed that threshold are defined as "extreme events". This is the so-called "Peak Over a Threshold (POT)" sampling, widespread in the literature and currently used for engineering applications among many others. In the past, the threshold for the statistical sampling of extreme values asymptotically converging toward the GPD and the threshold for the physical selection of independent extreme events were confused, as the same threshold was used both for sampling the data and for meeting the hypothesis of extreme value convergence, leading to some incoherencies. In particular, if the two steps are performed simultaneously, the number of peaks over the threshold can increase but also decrease when the threshold decreases. This is logical from a physical point of view, since the definition of the sample of "extreme events" changes, but it is not coherent with the statistical theory. We introduce a two-step threshold selection for over-threshold modelling, aiming to discriminate (i) a physical threshold for the selection of extreme and independent events, and (ii) a statistical threshold for the optimization of the coherence with the hypothesis of the EVT. The former is a physical event identification procedure (also called "declustering") aiming at selecting independent extreme events. The latter is a purely statistical optimization
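
    The two steps can be made concrete in a few lines: a physical threshold plus a separation criterion first decluster the series into independent events, and a (possibly different) statistical threshold is then applied before fitting the GPD. The thresholds, the 48 h separation and the surrogate series below are illustrative assumptions, not the paper's recommendations.

```python
# Two-step over-threshold modelling: physical declustering, then GPD fit
# above a separate statistical threshold (all values illustrative).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
series = rng.gumbel(2.0, 1.0, 20000)             # hourly surrogate series

# Step 1: physical threshold + declustering (one peak per cluster)
phys_thresh = 5.0
exceed = series > phys_thresh
peaks, i = [], 0
while i < len(series):
    if exceed[i]:
        j = i
        while j < len(series) and exceed[j]:
            j += 1
        peaks.append(series[i:j].max())          # cluster maximum
        i = j + 48                               # enforce 48 h separation
    else:
        i += 1

# Step 2: statistical threshold for the GPD fit
stat_thresh = 5.5
sample = [p - stat_thresh for p in peaks if p > stat_thresh]
shape, loc, scale = genpareto.fit(sample, floc=0.0)
print(f"GPD shape={shape:.3f}, scale={scale:.3f}, n={len(sample)}")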

  12. An in vitro evaluation of effect of eugenol exposure time on the shear bond strength of two-step and one-step self-etching adhesives to dentin.

    Science.gov (United States)

    Nasreen, Farhat; Guptha, Anila Bandlapalli Sreenivasa; Srinivasan, Raghu; Chandrappa, Mahesh Martur; Bhandary, Shreetha; Junjanna, Pramod

    2014-05-01

    To evaluate the effect of the eugenol exposure time of a eugenol-based provisional restorative material on the shear bond strength of two-step and one-step self-etching adhesives to dentin, at three different time intervals of 24 h, 7 days, and 14 days. Forty extracted human posterior teeth were sectioned mesiodistally to obtain two halves, and the resulting 80 halves were randomly assigned to four groups of 20 specimens each (Group-I, -II, -III, and -IV). Cavities of specified dimensions were prepared to expose the dentin surface. In Group-I, temporization was carried out with a non-eugenol cement (Orafil-G) for 24 h (control group). In Group-II, -III, and -IV, temporization was carried out with a eugenol cement (intermediate restorative material, IRM) for 24 h, 7 days, and 14 days, respectively. Each group was further divided into two subgroups of 10 teeth each for the two-step (Adper SE Plus) and one-step (Adper Easy One) self-etch adhesive systems, respectively. A plastic tube loaded with microhybrid composite resin (Filtek Z-350, 3M) was placed over the dentin surface and light cured. The specimens were subjected to shear stress in a universal testing machine. Group-II yielded lower shear bond strength values than Group-I, -III, and -IV, a difference that was statistically significant. The prior use of eugenol-containing temporary restorative material reduced the bond strength of the self-etch adhesive systems at the 24-h period. No reduction in bond strength was observed at 7 or 14 days of exposure with either the two-step or the one-step self-etch adhesive.

  13. Takeover times for a simple model of network infection

    Science.gov (United States)

    Ottino-Löffler, Bertrand; Scott, Jacob G.; Strogatz, Steven H.

    2017-07-01

    We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N ≫1 , the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d -dimensional lattices with d ≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.
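
    The model is simple enough to simulate directly. The sketch below estimates the takeover time T for a complete graph, which can be compared against the coupon-collector heuristic of roughly 2N ln N steps; the network size and run count are arbitrary.

```python
# Direct simulation of the infection model on a complete graph: at each
# time step pick a random node and a random neighbour; infect the
# neighbour if the node is infected and the neighbour is susceptible.
import numpy as np

def takeover_time(N, rng):
    infected = np.zeros(N, dtype=bool)
    infected[0] = True
    k, T = 1, 0
    while k < N:
        T += 1
        node = rng.integers(N)
        nbr = rng.integers(N - 1)
        nbr += nbr >= node                 # random neighbour != node
        if infected[node] and not infected[nbr]:
            infected[nbr] = True
            k += 1
    return T

rng = np.random.default_rng(6)
N = 200
times = [takeover_time(N, rng) for _ in range(200)]
print("mean T:", np.mean(times), "heuristic 2N ln N:", 2 * N * np.log(N))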

  14. A real time hyperelastic tissue model.

    Science.gov (United States)

    Zhong, Hualiang; Peters, Terry

    2007-06-01

    Real-time soft tissue modeling has potential applications in medical training, procedure planning and image-guided therapy. This paper characterizes the mechanical properties of organ tissue using a hyperelastic material model, an approach which is then incorporated into a real-time finite element framework. While generalizable, in this paper we use the published mechanical properties of pig liver to characterize an example application. Specifically, we calibrate the parameters of an exponential model with a least-squares method (LSM), under the assumption that the material is isotropic and incompressible, in a uniaxial compression test. From the parameters obtained, the stress-strain curves generated from the LSM are compared to those from the corresponding computational model solved by ABAQUS and also to experimental data, resulting in mean errors of 1.9 and 4.8%, respectively, which are considerably better than those obtained when employing the Neo-Hookean model. We demonstrate our approach through the simulation of a biopsy procedure, employing a tetrahedral mesh representation of human liver generated from a CT image. Using the material properties along with the geometric model, we develop a nonlinear finite element framework to simulate the behaviour of liver during an interventional procedure, with real-time performance achieved through the use of an interpolation approach.
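
    The calibration step can be illustrated with a least-squares fit of an assumed exponential stress-strain law sigma = a*(exp(b*eps) - 1) to uniaxial data; the functional form, the parameter values and the synthetic data below are placeholders, not the published liver measurements.

```python
# Least-squares calibration of an assumed exponential stress-strain law
# on synthetic uniaxial data (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def sigma(eps, a, b):
    return a * (np.exp(b * eps) - 1.0)

rng = np.random.default_rng(7)
eps = np.linspace(0.0, 0.3, 30)                   # uniaxial strain levels
data = sigma(eps, 2.0, 8.0) * (1 + 0.02 * rng.standard_normal(30))

(a_fit, b_fit), _ = curve_fit(sigma, eps, data, p0=(1.0, 5.0))
print(f"fitted a = {a_fit:.3f}, b = {b_fit:.3f}")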

  15. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  16. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared the reliabilities of predicted breeding values obtained from the single-step method and the traditional pedigree-based method for two litter size traits, total number of piglets born (TNB) and litter size at five days after birth (LS5), in Danish Landrace and Yorkshire pigs. The results showed that the single-step method, combining phenotypic and genotypic information, provided more accurate predictions than the pedigree-based method, not only......

  17. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet, to largely avoid redundant frequency information and to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of the update parameters of the nonlinear conjugate gradient (CG) method and of the step-length formulas on the multiscale FWI through several numerical tests. The investigation of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, makes clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical Methods of Optimization, Vol. 1: Unconstrained Optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Française Informat Recherche Opérationnelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are more efficient among the eight versions, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that Interp is more efficient for noise-free data, while Direct is more efficient for Gaussian white noise data. In contrast, Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust, or partly insensitive, to Gaussian white noise and to the complexity of the model. When the initial velocity model deviates far from the real model or the
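
    For reference, the update parameters compared in the paper have compact standard forms; the sketch below collects the HS, PRP, CD and DY formulas, with g the current gradient, g_prev the previous gradient and d the previous search direction. It illustrates the formulas only and is not the authors' FWI implementation; the test vectors are arbitrary.

```python
# Standard nonlinear CG update parameters (beta); the new search
# direction is d_new = -g + beta * d.
import numpy as np

def beta_cg(g, g_prev, d, variant):
    y = g - g_prev
    if variant == "HS":                  # Hestenes-Stiefel (1952)
        return g @ y / (d @ y)
    if variant == "PRP":                 # Polak-Ribiere-Polyak (1969)
        return g @ y / (g_prev @ g_prev)
    if variant == "CD":                  # Conjugate Descent (Fletcher 1987)
        return g @ g / -(d @ g_prev)
    if variant == "DY":                  # Dai-Yuan (1999)
        return g @ g / (d @ y)
    raise ValueError(variant)

g_prev = np.array([1.0, -2.0])
g = np.array([0.5, -1.2])
d = -g_prev                              # first direction is steepest descent
for v in ("HS", "PRP", "CD", "DY"):
    print(v, beta_cg(g, g_prev, d, v))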

  18. Stepping motor controller

    Science.gov (United States)

    Bourret, Steven C.; Swansen, James E.

    1984-01-01

    A stepping motor is controlled by microprocessor-based digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
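
    A hypothetical software rendering of this control law is sketched below: the next pulse is issued only after the encoder confirms the previous step and a tunable delay has elapsed. The pulse and encoder stubs stand in for real hardware interfaces and are not part of the described device.

```python
# Encoder-confirmed stepping: issue pulse N+1 only after the encoder
# reports step N complete and a fixed (but tunable) delay has expired.
import time

def run_steps(n_steps, pulse, encoder_count, delay_s=0.002):
    """pulse(): emit one step pulse; encoder_count(): shaft encoder reading."""
    for expected in range(1, n_steps + 1):
        pulse()
        while encoder_count() < expected:   # wait for the step to complete
            pass
        time.sleep(delay_s)                 # shrink/grow delay to ramp speed

# Stub "hardware" for demonstration purposes only
_count = 0
def pulse():
    global _count
    _count += 1                             # ideal motor: every pulse steps
def encoder_count():
    return _count

run_steps(10, pulse, encoder_count)
print("steps completed:", _count)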

  19. Space-time modeling of timber prices

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber prices in 21 geographically contiguous regions of the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...

  20. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up a n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  1. First step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1984-09-01

    The first step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code is presented. This step consists of the introduction of a simplified model for simulating the steam generator. This model is the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code has a very detailed model for the steam generator, called UTSG. This model has spatial dependence, correlations for 2-phase flow, and distinct correlations for different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous gaseous-liquid mixture), has no spatial dependence, and uses only one generalized correlation to treat several heat transfer processes. (Author) [pt

  2. Multi-step constant-current charging method for electric vehicle, valve-regulated, lead/acid batteries during night time for load-levelling

    Energy Technology Data Exchange (ETDEWEB)

    Ikeya, Tomohiko; Mita, Yuichi; Ishihara, Kaoru [Central Research Inst. of Electric Power Industry, Tokyo (Japan); Sawada, Nobuyuki [Hokkaido Electric Power, Sapporo (Japan); Takagi, Sakae; Murakami, Jun-ichi [Tohoku Electric Power, Sendai (Japan); Kobayashi, Kazuyuki [Tokyo Electric Power, Yokohama (Japan); Sakabe, Tetsuya [Chubu Electric Power, Nagoya (Japan); Kousaka, Eiichi [Hokuriku Electric Power, Toyama (Japan); Yoshioka, Haruki [The Kansai Electric Power, Osaka (Japan); Kato, Satoru [The Chugoku Electric Power, Hiroshima (Japan); Yamashita, Masanori [Shikoku Research Inst., Takamatsu (Japan); Narisoko, Hayato [The Okinawa Electric Power, Naha (Japan); Nishiyama, Kazuo [The Central Electric Power Council, Tokyo (Japan); Adachi, Kazuyuki [Kyushu Electric Power, Fukuoka (Japan)

    1998-09-01

    For the popularization of electric vehicles (EVs), the conditions for charging EV batteries with available current patterns should allow complete charging in a short time, i.e., less than 5 to 8 h. Therefore, in this study, a new charging condition is investigated for the EV valve-regulated lead/acid battery system, which should allow complete charging of EV battery systems with multi-step constant currents in a much shorter time, with longer cycle life and higher energy efficiency, than two-step constant-current charging. Although a high magnitude of the first current in the two-step constant-current method prolongs cycle life by suppressing the softening of positive active material, too large a charging current degrades cells due to excess internal heat evolution. A charging current magnitude of approximately 0.5 C is expected to prolong cycle life further. Three-step charging could also increase the magnitude of the charging current in the first step without shortening cycle life. Four- or six-step constant-current methods could shorten the charging time to less than 5 h, as well as yield higher energy efficiency and an enhanced cycle life of over 400 cycles compared with two-step charging with a first-step current of 0.5 C. Investigation of the degradation mechanism of the batteries revealed that the conditions of multi-step constant-current charging suppressed softening of positive active material and sulfation of negative active material but, unfortunately, advanced the corrosion of the grids in the positive plates. By adopting improved grids and cooling of the battery system, the multi-step constant-current method may enhance the cycle life. (orig.)
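
    The charging schedule itself is easy to state in code. Below is a toy coulomb-counting sketch of a multi-step constant-current profile; the step currents and switch points are invented for illustration, not taken from the study:

```python
def simulate_multi_step_cc(capacity_ah, steps, dt_h=0.01):
    """Toy coulomb-counting simulation of multi-step constant-current charging.
    `steps` is a list of (c_rate, soc_cutoff) pairs; the charger switches to
    the next, smaller current once each state-of-charge cutoff is reached.
    (Real chargers also switch on voltage and temperature.)"""
    soc, t = 0.0, 0.0
    for c_rate, cutoff in steps:
        current_a = c_rate * capacity_ah
        while soc < cutoff:
            soc += current_a * dt_h / capacity_ah
            t += dt_h
    return t  # total charging time in hours

# A hypothetical four-step pattern starting at 0.5 C finishes in under 5 h:
print(simulate_multi_step_cc(60.0, [(0.5, 0.6), (0.35, 0.8), (0.2, 0.92), (0.1, 1.0)]))
```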

  3. A two-step ionospheric modeling algorithm considering the impact of GLONASS pseudo-range inter-channel biases

    Science.gov (United States)

    Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei

    2017-12-01

    The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). However, the estimated TEC can be seriously affected by the differential code biases (DCB) of frequency-dependent satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier-phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-located stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable for a period of time and could reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given these characteristics, we proposed a two-step ionosphere modeling method with a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on the estimation of the ionospheric model and hardware delay parameters in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC units); simultaneously, the average RMS of the GPS satellite DCBs decreased from 0.225 to 0.219 ns, and the average deviation of the GLONASS satellite DCBs decreased from 0.253 to 0.113 ns, an improvement of over 55%.
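
    The two-step idea can be sketched compactly: first estimate the pseudo-range IFB from co-located receivers, where the common ionosphere cancels, then remove that a priori IFB before the least-squares ionospheric fit. A minimal sketch under those simplifying assumptions (the design matrix and observables are placeholders, not the authors' processing chain):

```python
import numpy as np

def estimate_ifb(gf_obs_station_a, gf_obs_station_b):
    """Step 1: co-located receivers see the same ionosphere, so the time-mean
    single difference of the geometry-free P1-P2 observable isolates the
    relative pseudo-range IFB per GLONASS channel."""
    return np.mean(gf_obs_station_a - gf_obs_station_b, axis=0)

def correct_then_fit(gf_obs, ifb, design_matrix):
    """Step 2: remove the a priori IFB, then least-squares fit of the
    ionospheric (TEC) and DCB parameters."""
    corrected = gf_obs - ifb
    params, *_ = np.linalg.lstsq(design_matrix, corrected, rcond=None)
    return params
```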

  4. Next Step for STEP

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Claire [CTSI; Bremner, Brenda [CTSI

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

  5. Dynamical Constants and Time Universals: A First Step toward a Metrical Definition of Ordered and Abnormal Cognition.

    Science.gov (United States)

    Elliott, Mark A; du Bois, Naomi

    2017-01-01

    From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data

  6. Model-based optimization of the primary drying step during freeze-drying

    DEFF Research Database (Denmark)

    Mortier, Séverine Thérèse F.C.; Van Bockstal, Pieter-Jan; Nopens, Ingmar

    2015-01-01

    Since large molecules are considered the key driver for growth of the pharmaceutical industry, the focus of the pharmaceutical industry is shifting from small molecules to biopharmaceuticals: around 50% of the approved biopharmaceuticals are freeze-dried products. Therefore, freeze-drying is an important technology to stabilise biopharmaceutical drug products which are unstable in an aqueous solution. However, the freeze-drying process is an energy- and time-consuming process. The use of mechanistic modelling to gather process knowledge can assist in optimisation of the process parameters during the operation of the freeze-drying process. By applying a dynamic shelf temperature and chamber pressure, which are the only controllable process variables, the processing time can be decreased by a factor of 2 to 3.
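
    The mechanistic core of such primary-drying models is often a mass-transfer balance of the form sublimation rate = A_p (P_sub(T_i) - P_c) / R_p. A minimal sketch with an assumed vapour-pressure fit and illustrative parameter values (not the paper's):

```python
import numpy as np

def vapour_pressure_ice(T):
    """Assumed Clausius-Clapeyron-type fit for ice vapour pressure [Pa]."""
    return 3.597e12 * np.exp(-6150.0 / T)

def sublimation_rate(T_i, P_c, A_p=1e-4, R_p=1e5):
    """Sublimation rate [kg/s] for one vial: vapour pressure at the sublimation
    front minus chamber pressure, over the dried-layer resistance.
    A_p [m^2] and R_p are illustrative placeholder values."""
    return A_p * (vapour_pressure_ice(T_i) - P_c) / R_p

print(sublimation_rate(T_i=233.0, P_c=10.0))
```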

  7. Improving pain care through implementation of the Stepped Care Model at a multisite community health center

    Directory of Open Access Journals (Sweden)

    Anderson DR

    2016-11-01

    Full Text Available Daren R Anderson,1 Ianita Zlateva,1 Emil N Coman,2 Khushbu Khatri,1 Terrence Tian,1 Robert D Kerns3 1Weitzman Institute, Community Health Center, Inc., Middletown, 2UCONN Health Disparities Institute, University of Connecticut, Farmington, 3VA Connecticut Healthcare System, West Haven, CT, USA Purpose: Treating pain in primary care is challenging. Primary care providers (PCPs) receive limited training in pain care and express low confidence in their knowledge and ability to manage pain effectively. Models to improve pain outcomes have been developed, but not formally implemented in safety net practices where pain is particularly common. This study evaluated the impact of implementing the Stepped Care Model for Pain Management (SCM-PM) at a large, multisite Federally Qualified Health Center. Methods: The Promoting Action on Research Implementation in Health Services framework guided the implementation of the SCM-PM. The multicomponent intervention included: education on pain care, new protocols for pain assessment and management, implementation of an opioid management dashboard, telehealth consultations, and enhanced onsite specialty resources. Participants included 25 PCPs and their patients with chronic pain (3,357 preintervention and 4,385 postintervention) cared for at Community Health Center, Inc. Data were collected from the electronic health record and supplemented by chart reviews. Surveys were administered to PCPs to assess knowledge, attitudes, and confidence. Results: Providers' pain knowledge scores increased by an average of 11% from baseline; self-rated confidence in ability to manage pain also increased. Use of opioid treatment agreements and urine drug screens increased significantly by 27.3% and 22.6%, respectively. Significant improvements were also noted in documentation of pain, pain treatment, and pain follow-up. Referrals to behavioral health providers for patients with pain increased by 5.96% (P=0.009). There was no

  8. Time-stepping techniques to enable the simulation of bursting behavior in a physiologically realistic computational islet.

    Science.gov (United States)

    Khuvis, Samuel; Gobbert, Matthias K; Peercy, Bradford E

    2015-05-01

    Physiologically realistic simulations of computational islets of beta cells require the long-time solution of several thousand coupled ordinary differential equations (ODEs), resulting from the combination of several ODEs in each cell and realistic numbers of several hundred cells in an islet. For a reliable and accurate solution of complex nonlinear models up to the desired final times on the scale of several bursting periods, an ODE solver designed for stiff problems is ultimately a necessity, since other solvers may not be able to handle the problem or are exceedingly inefficient. But stiff solvers are potentially significantly harder to use, since their algorithms require at least an approximation of the Jacobian matrix. For sophisticated models, i.e., systems of several complex ODEs in each cell, it is practically unworkable to differentiate these intricate nonlinear systems analytically and to manually program the resulting Jacobian matrix in computer code. This paper demonstrates that automatic differentiation can be used to obtain code for the Jacobian directly from code for the ODE system, which allows a full accounting for the sophisticated model equations. This technique is also feasible in source-code languages such as Fortran and C, and the conclusions apply to a wide range of systems of coupled, nonlinear reaction equations. However, when we combine an appropriately supplied Jacobian with slightly modified memory management in the ODE solver, simulations on the realistic scale of one thousand cells in the islet become possible that are several orders of magnitude faster than the original solver in the software Matlab, a language that is particularly user-friendly for programming complicated model equations. We use the efficient simulator to analyze electrical bursting and show a non-monotonic average burst period between fast and slow cells for increasing coupling strengths. We also find that, interestingly, the arrangement of the connected fast
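
    The key trick, obtaining the Jacobian by automatic differentiation of the right-hand-side code, can be sketched in a few lines. Here we use JAX and SciPy's BDF stiff solver on a toy two-variable excitable-cell model (FitzHugh-Nagumo); the paper's own implementation targeted Matlab/Fortran/C, so this is an illustration of the idea, not their code:

```python
import numpy as np
import jax
import jax.numpy as jnp
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Toy two-variable excitable-cell model (FitzHugh-Nagumo)."""
    v, w = y
    return jnp.array([v - v**3 / 3.0 - w, 0.08 * (v + 0.7 - 0.8 * w)])

# Jacobian d(rhs)/dy generated automatically: no hand derivation or manual coding.
jac = jax.jit(jax.jacfwd(rhs, argnums=1))

sol = solve_ivp(lambda t, y: np.asarray(rhs(t, jnp.array(y))),
                (0.0, 200.0), [0.0, 0.0], method="BDF",
                jac=lambda t, y: np.asarray(jac(t, jnp.array(y))))
print(sol.y[:, -1])
```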

  9. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least-squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is carried out on time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  10. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    Full Text Available In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  11. The antibiotic resistance arrow of time: efflux pump induction is a general first step in the evolution of mycobacterial drug resistance.

    Science.gov (United States)

    Schmalstieg, Aurelia M; Srivastava, Shashikant; Belkaya, Serkan; Deshpande, Devyani; Meek, Claudia; Leff, Richard; van Oers, Nicolai S C; Gumbo, Tawanda

    2012-09-01

    We hypothesize that low-level efflux pump expression is the first step in the development of high-level drug resistance in mycobacteria. We performed 28-day azithromycin dose-effect and dose-scheduling studies in our hollow-fiber model of disseminated Mycobacterium avium-M. intracellulare complex. Both microbial kill and resistance emergence were most closely linked to the within-macrophage area under the concentration-time curve (AUC)/MIC ratio. Quantitative PCR revealed that subtherapeutic azithromycin exposures over 3 days led to a 56-fold increase in expression of MAV_3306, which encodes a putative ABC transporter, and MAV_1406, which encodes a putative major facilitator superfamily pump, in M. avium. By day 7, a subpopulation of M. avium with low-level resistance was encountered and exhibited the classic inverted U curve versus AUC/MIC ratios. The resistance was abolished by an efflux pump inhibitor. While the maximal microbial kill started to decrease after day 7, a population with high-level azithromycin resistance appeared at day 28. This resistance could not be reversed by efflux pump inhibitors. Orthologs of pumps encoded by MAV_3306 and MAV_1406 were identified in Mycobacterium tuberculosis, Mycobacterium leprae, Mycobacterium marinum, Mycobacterium abscessus, and Mycobacterium ulcerans. All had highly conserved protein secondary structures. We propose that induction of several efflux pumps is the first step in a general pathway to drug resistance that eventually leads to high-level chromosomal-mutation-related resistance in mycobacteria as ordered events in an "antibiotic resistance arrow of time."

  12. Discrete time modelization of human pilot behavior

    Science.gov (United States)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a discrete-time process; he can perform only one task at a time; and his operating mode depends on the flight subphase considered. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.
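
    The first (Markovian) strategy amounts to sampling successive instrument readings from a matrix of conditional probabilities. A toy sketch with invented instruments and transition probabilities:

```python
import numpy as np

# Row i holds P(next reading | current reading = i); values are illustrative.
instruments = ["horizon", "airspeed", "altimeter", "heading"]
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.6, 0.1, 0.2, 0.1],
              [0.5, 0.2, 0.1, 0.2],
              [0.7, 0.1, 0.1, 0.1]])

rng = np.random.default_rng(0)
state, scan = 0, []
for _ in range(10):                       # simulate ten successive readings
    state = rng.choice(4, p=P[state])
    scan.append(instruments[state])
print(scan)
```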

  13. Linear Parametric Model Checking of Timed Automata

    DEFF Research Database (Denmark)

    Hune, Thomas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class where it is known to be undecidable. We also present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  14. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to group representation theory. Equilibria of the non-conserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.
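
    Structurally, the MRT collision is a change of basis: map distributions to moments, relax each moment at its own rate, and map back. The sketch below shows only that structure; the matrices and equilibria are random placeholders, not a physical parameterisation of the model in the paper:

```python
import numpy as np

def mrt_collide(f, M, M_inv, S, m_eq):
    """One MRT collision: distributions -> moments, per-moment relaxation
    (diagonal S holds the multiple relaxation rates), moments -> distributions."""
    m = M @ f
    m_post = m - S @ (m - m_eq)
    return M_inv @ m_post

Q = 9                                          # e.g. a 9-velocity stencil
rng = np.random.default_rng(1)
M = rng.normal(size=(Q, Q)) + Q * np.eye(Q)    # placeholder invertible transform
f_post = mrt_collide(rng.random(Q), M, np.linalg.inv(M),
                     np.diag(rng.random(Q)), rng.random(Q))
print(f_post)
```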

  15. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    Directory of Open Access Journals (Sweden)

    Craig Cora L

    2011-06-01

    Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, television between school and dinner on school days. Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.

  16. Time-resolved measurements of laser-induced diffusion of CO molecules on stepped Pt(111)-surfaces; Zeitaufgeloeste Untersuchung der laser-induzierten Diffusion von CO-Molekuelen auf gestuften Pt(111)-Oberflaechen

    Energy Technology Data Exchange (ETDEWEB)

    Lawrenz, M.

    2007-10-30

    In the present work the dynamics of CO molecules on a stepped Pt(111) surface induced by fs-laser pulses at low temperatures was studied using laser spectroscopy. In the first part of the work, laser-induced diffusion for the CO/Pt(111) system was demonstrated and modelled successfully for step diffusion. First, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled. A friction coefficient which depends on the electron temperature yields a consistent model, whereby the same set of parameters was used in parallel to understand both the fluence-dependent and the time-resolved measurements. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces and the diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. The additionally performed two-pulse correlation measurements also indicate an electronically induced process. At a substrate temperature of 40 K the cross-correlation, from which an energy transfer time of 1.8 ps was extracted, also suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed for different substrate temperatures. (orig.)

  17. Modelling of Patterns in Space and Time

    CERN Document Server

    Murray, James

    1984-01-01

    This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...

  18. Axial model in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Barcelos-Neto, J.; Farina, C.; Vaidya, A.N.

    1986-12-11

    We study the axial model in a background gravitational field. Using the zeta-function regularization, we obtain explicitly the anomalous divergence of the axial-vector current and the exact generating functional of the theory. We show that, as a consequence of a space-time-dependent metric, all differential equations involved in the theory generalize to their covariantized forms. We also comment on the finite-mass renormalization exhibited by the pseudoscalar field and the form of the fermion propagator.

  19. Direct construction of predictive models for describing growth Salmonella enteritidis in liquid eggs – a one-step approach

    Science.gov (United States)

    The objective of this study was to develop a new approach using a one-step approach to directly construct predictive models for describing the growth of Salmonella Enteritidis (SE) in liquid egg white (LEW) and egg yolk (LEY). A five-strain cocktail of SE, induced to resist rifampicin at 100 mg/L, ...

  20. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    Science.gov (United States)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC), or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to a previously developed one-step time-averaged method, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx are obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
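
    The switching logic of the two-time-step method can be written down directly; the correlation functions themselves are the paper's fits and are treated here as caller-supplied inputs:

```python
def chemical_kinetic_time(fuel_conc, water_conc, T, P,
                          step1_correlation, step2_correlation,
                          water_switch=1e-20):
    """Switch between the two correlations at the water concentration given in
    the abstract (1e-20 moles/cc). The correlation callables stand in for the
    paper's fitted functions and are supplied by the caller."""
    if water_conc < water_switch:
        # step one: initial, time-averaged correlation
        return step1_correlation(T=T, P=P)
    # step two: instantaneous correlation
    return step2_correlation(fuel_conc, water_conc, T, P)
```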

  1. Space-time adaptive hierarchical model reduction for parabolic equations.

    Science.gov (United States)

    Perotto, Simona; Zilio, Alessandro

    Surrogate solutions and surrogate models for complex problems in many fields of science and engineering represent an important recent research line towards the construction of the best trade-off between modeling reliability and computational efficiency. Among surrogate models, hierarchical model (HiMod) reduction provides an effective approach for phenomena characterized by a dominant direction in their dynamics. The HiMod approach obtains 1D models naturally enhanced by the inclusion of the effect of the transverse dynamics. HiMod reduction couples a finite element approximation along the mainstream with a locally tunable modal representation of the transverse dynamics. In particular, we focus on the pointwise HiMod reduction strategy, where the modal tuning is performed at each finite element node. We formalize the pointwise HiMod approach in an unsteady setting, by resorting to a model discontinuous in time, continuous and hierarchically reduced in space (c[M([Formula: see text])G(s)]-dG(q) approximation). The selection of the modal distribution and of the space-time discretization is automatically performed via an adaptive procedure based on an a posteriori analysis of the global error. The final outcome of this procedure is a table, named the HiMod lookup diagram, that sets the time partition and, for each time interval, the corresponding 1D finite element mesh together with the associated modal distribution. The results of the numerical verification confirm the robustness of the proposed adaptive procedure in terms of accuracy, sensitivity with respect to the goal quantity and the boundary conditions, and computational saving. Finally, the validation results in the groundwater experimental setting are promising. The extension of HiMod reduction to an unsteady framework represents a crucial step with a view to practical engineering applications. Moreover, the results of the validation phase confirm that the HiMod approximation is a viable approach.

  2. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Full Text Available Abstract Background Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions Time series methods applied to historical ED utilization data are an important tool
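
    The modelling approach generalises readily: fit a seasonal ARIMA-type model to daily visit counts and alarm on days above the forecast band. A self-contained sketch on synthetic data (the model orders and threshold are assumptions, not the paper's fitted models):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily ED visit counts with an annual seasonal swing.
rng = np.random.default_rng(0)
days = pd.date_range("2000-01-01", periods=3 * 365, freq="D")
visits = (120 + 15 * np.sin(2 * np.pi * days.dayofyear / 365.25)
          + rng.normal(0, 8, len(days)))
y = pd.Series(visits, index=days)

# Assumed orders: ARMA(1,1) with a weekly seasonal AR term.
fit = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 0, 7)).fit(disp=False)
pred = fit.get_forecast(steps=7)
upper = pred.conf_int(alpha=0.05).iloc[:, 1]   # alarm threshold: upper 95% band
print(upper.head())
```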

  3. Modeling utilization distributions in space and time

    Science.gov (United States)

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
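
    The novel ingredient is the wrapped Cauchy kernel for circular time covariates, combined multiplicatively with spatial kernels. A minimal sketch with illustrative parameters and fake relocations:

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy pdf on [0, 2*pi); rho in [0, 1) controls concentration."""
    return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

def product_kernel(x, y, t, xi, yi, ti, hx=1.0, hy=1.0, rho=0.9):
    """Product of two Gaussian spatial kernels and one wrapped Cauchy temporal
    kernel, averaged over the relocations (xi, yi, ti): a UD density estimate."""
    gx = np.exp(-0.5 * ((x - xi) / hx) ** 2) / (hx * np.sqrt(2 * np.pi))
    gy = np.exp(-0.5 * ((y - yi) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
    return np.mean(gx * gy * wrapped_cauchy(t, ti, rho))

# Density at one (x, y, day-of-year angle) point from 200 fake relocations:
rng = np.random.default_rng(2)
xi, yi = rng.normal(0, 1, 200), rng.normal(0, 1, 200)
ti = rng.uniform(0, 2 * np.pi, 200)
print(product_kernel(0.0, 0.0, np.pi, xi, yi, ti))
```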

  4. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  5. Modelling of subcritical free-surface flow over an inclined backward-facing step in a water channel

    Directory of Open Access Journals (Sweden)

    Šulc Jan

    2012-04-01

    Full Text Available The contribution deals with the experimental and numerical modelling of subcritical turbulent flow in an open channel with an inclined backward-facing step. The step, with an inclination angle α = 20°, was placed in a water channel of cross-section 200×200 mm. Experiments were carried out by means of the PIV and LDA measuring techniques. Numerical simulations were performed with the commercial software ANSYS CFX 12.0. Numerical results obtained for two-equation models and the EARSM turbulence model, complemented by transport equations for turbulent energy and specific dissipation rate, were compared with experimental data. The modelling concentrated particularly on the development of the flow separation and on the corresponding changes of the free surface.

  6. New Reduced Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    Science.gov (United States)

    Molnar, Melissa; Marek, C. John

    2004-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC), or even simple FORTRAN codes being developed at Glenn. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to a previously developed one-step time-averaged method, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3

  7. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    Full Text Available During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster trial (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone and used its output to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on each cluster's underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.

  8. A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print

    Science.gov (United States)

    Knowlton, Steven A.

    2016-01-01

    Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…

  9. Multi-step Attack Modelling and Simulation (MsAMS) Framework based on Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.; Lopes, R H C; van Eck, Pascal

    Attackers take advantage of any security breach to penetrate an organisation's perimeter and exploit hosts as stepping stones to reach valuable assets deeper in the network. The exploitation of hosts is possible not only when vulnerabilities in commercial off-the-shelf (COTS) software components are

  10. Study of the Inception Length of Flow over Stepped Spillway Models ...

    African Journals Online (AJOL)

    The results showed that the inception (development) length increases as the unit discharge increases and it decreases with an increase in both stepped roughness height and chute angle. The ratio of the development length, in this study, to that of Bauer's was found to be 4:5. Finally, SMM-5 produced the least velocity of ...

  11. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: the CANPLAY Surveillance Study.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M

    2011-06-25

    This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched, on average, television between school and dinner on school days. Adjusting for child's age and sex and parental education, the odds of a child being obese decreased by 20% for every extra 3000 steps/day and increased by 21% for every 30 minutes of television watching. There was no association of being overweight with steps/day, however the odds of being overweight increased by 8% for every 30 minutes of additional time spent watching television between school and dinner on a typical school day. Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by

  12. A high-order solver for unsteady incompressible Navier-Stokes equations using the flux reconstruction method on unstructured grids with implicit dual time stepping

    Science.gov (United States)

    Cox, Christopher; Liang, Chunlei; Plesniak, Michael

    2015-11-01

    This paper reports the development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate the implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time-stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.
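
    The dual time stepping structure is worth making explicit: an outer BDF2 loop in physical time and an inner pseudo-time iteration that drives the unsteady residual to zero. The sketch below shows only that skeleton with a placeholder spatial residual, not the flux reconstruction operator:

```python
import numpy as np

def residual(q):
    """Placeholder spatial residual R(q); stands in for the FR discretisation."""
    return -q

def dual_time_step(q_n, q_nm1, dt, dtau=0.02, iters=200):
    """Advance one physical step: inner explicit pseudo-time iterations drive
    the BDF2 unsteady residual to zero (which, in the artificial
    compressibility setting, also enforces the divergence-free constraint)."""
    q = q_n.copy()
    for _ in range(iters):
        unsteady = (3 * q - 4 * q_n + q_nm1) / (2 * dt)   # BDF2 in real time
        q = q + dtau * (residual(q) - unsteady)           # pseudo-time march
    return q

q_nm1, q_n = np.ones(4), 0.9 * np.ones(4)
print(dual_time_step(q_n, q_nm1, dt=0.05))
```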

  13. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali from January 2010 to December 2015 was used to model Korean tourist arrivals (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
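
    A TVP regression of this kind is typically cast as a state-space model with random-walk coefficients and estimated by a Kalman filter. A minimal sketch on simulated data (the noise variances and predictor values are assumptions, not the study's):

```python
import numpy as np

def tvp_kalman(y, X, q=1e-4, r=1.0):
    """Kalman filter for y_t = x_t' beta_t + noise, with random-walk beta_t.
    q and r are assumed state and observation noise variances."""
    n, k = X.shape
    beta, P = np.zeros(k), np.eye(k)
    betas = np.zeros((n, k))
    for t in range(n):
        P = P + q * np.eye(k)          # predict: random-walk coefficients
        x = X[t]
        S = x @ P @ x + r              # innovation variance
        K = P @ x / S                  # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

rng = np.random.default_rng(3)
X = rng.normal(size=(72, 3))           # stand-ins for WON, INFKR, INFID
true = 1.0 + np.cumsum(rng.normal(0, 0.02, size=(72, 3)), axis=0)
y = (X * true).sum(axis=1) + rng.normal(0, 0.5, 72)
print(tvp_kalman(y, X)[-1])            # final coefficient estimates
```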

  14. Modeling and simulation of GaN step-up power switched capacitor converter

    Science.gov (United States)

    Alateeq, Ayoob S.; Almalaq, Yasser A.; Matin, Mohammad A.

    2017-08-01

    This paper discusses a proposed DC-DC switched capacitor converter for low-voltage electronic products. The proposed converter is a two-level power switched capacitor (PSC) boost converter. One of the converter's objectives is to step the input voltage up to four times its value. Because the proposed two-level PSC consists of eight switches and five capacitors, it occupies a small area in electronic products. The eight switches were selected to be GaN transistors to maintain efficiency at high rated power or high temperatures. The LTSpice simulator was used to test the proposed model. Since the design contains semiconductor elements such as GaN transistors, a 10% error is a reasonable variance between the mathematical and simulation results.

  15. Modelling Limit Order Execution Times from Market Data

    Science.gov (United States)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is very loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means an ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: 1. execution time of limit orders, and 2. price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step to understanding market liquidity, we studied the facet of liquidity related to limit order executions: execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, which is a generalized form of the Weibull distribution that can control the fatness of the tail, to model limit order execution times.
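
    The q-Weibull density generalises the Weibull by replacing the exponential with a Tsallis q-exponential, which produces the tunable fat tail mentioned above (q -> 1 recovers the ordinary Weibull; q must differ from 1 in this form). A sketch with illustrative parameters:

```python
import numpy as np

def q_weibull_pdf(t, q, beta, eta):
    """q-Weibull density: (2-q)*(beta/eta)*(t/eta)^(beta-1)*e_q(-(t/eta)^beta),
    where e_q(x) = [1 + (1-q)x]^(1/(1-q)) is the q-exponential (q != 1)."""
    z = (t / eta) ** beta
    eq = np.maximum(1.0 - (1.0 - q) * z, 0.0) ** (1.0 / (1.0 - q))
    return (2.0 - q) * (beta / eta) * (t / eta) ** (beta - 1.0) * eq

t = np.linspace(0.01, 10, 5)
print(q_weibull_pdf(t, q=1.3, beta=0.9, eta=1.0))   # power-law tail for q > 1
```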

  16. Explaining the Relationship Between Sexually Explicit Internet Material and Casual Sex: A Two-Step Mediation Model.

    Science.gov (United States)

    Vandenbosch, Laura; van Oosten, Johanna M F

    2018-03-19

    Despite increasing interest in the implications of adolescents' use of sexually explicit Internet material (SEIM), we still know little about the relationship between SEIM use and adolescents' casual sexual activities. Based on a three-wave online panel survey study among Dutch adolescents (N = 1079; 53.1% boys; 93.5% with an exclusively heterosexual orientation; M age = 15.11, SD = 1.39), we found that watching SEIM predicted engagement in casual sex over time. In turn, casual sexual activities partially predicted adolescents' use of SEIM. A two-step mediation model was tested to explain the relationship between watching SEIM and casual sex, and it was partially confirmed. First, watching SEIM predicted adolescents' perceptions of SEIM as a relevant information source from Wave 2 to Wave 3, but not from Wave 1 to Wave 2. Next, such perceived utility of SEIM was positively related to stronger instrumental attitudes toward sex, that is, views of sex as a core instrument for sexual gratification. Lastly, adolescents' instrumental attitudes toward sex predicted their engagement in casual sex activities consistently across waves. Partial support emerged for a reciprocal relationship between watching SEIM and perceived utility. We did not find a reverse relationship between casual sex activities and instrumental attitudes toward sex. No significant gender differences emerged.

  17. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front Ti and the sublimation rate ṁsub. Ts was identified as the most influential parameter for both Ti and ṁsub, followed by Pc for Ti and the dried product mass transfer resistance αRp for ṁsub. The GSA findings were experimentally validated for ṁsub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for the evaluation of the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
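
    Of the two GSA flavours used, the regression-based variant is the simplest to sketch: sample the inputs, run the model, and rank inputs by standardised regression coefficients. The model and input ranges below are stand-ins, not the validated primary-drying model of the paper:

```python
import numpy as np

def toy_model(Ts, Pc, Rp):
    """Placeholder for the mechanistic model output (e.g. sublimation rate)."""
    return 0.8 * Ts - 0.3 * Pc - 0.5 * Rp + 0.05 * Ts * Pc

rng = np.random.default_rng(4)
N = 2000
X = rng.uniform([-40, 5, 1], [0, 20, 5], size=(N, 3))   # assumed Ts, Pc, Rp ranges
Y = toy_model(*X.T)

Xs = (X - X.mean(0)) / X.std(0)       # standardise inputs and output
Ys = (Y - Y.mean()) / Y.std()
src, *_ = np.linalg.lstsq(Xs, Ys, rcond=None)
print(dict(zip(["Ts", "Pc", "Rp"], np.round(src, 3))))  # |SRC| ranks influence
```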

  18. The manifold model for space-time

    International Nuclear Information System (INIS)

    Heller, M.

    1981-01-01

    Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C∞ manifold and of certain structures which arise in a natural way from the manifold concept are given. The role of the enveloping Euclidean space (i.e. the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties in the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)

  19. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    Science.gov (United States)

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer in the adsorption of the dye onto the adsorbent. The mass transfer resistance was considered the criterion distinguishing the models, which contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance: first, the mass transfer coefficient was considered constant; second, the mass transfer coefficient was considered a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the others and described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
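
    The estimation step can be illustrated with SciPy's Nelder-Mead (downhill simplex) routine fitting a breakthrough-curve model to data; the Thomas-type curve below is a common stand-in, not the authors' exact model:

```python
import numpy as np
from scipy.optimize import minimize

def breakthrough(t, k, q0, c0=0.1, m=1.0, Q=0.5):
    """Thomas-type breakthrough curve C(t) for a fixed-bed column; c0 is the
    feed concentration, m the adsorbent mass, Q the flow rate (toy units)."""
    return c0 / (1.0 + np.exp(k * q0 * m / Q - k * c0 * t))

# Synthetic "experimental" breakthrough data with noise.
t_obs = np.linspace(0.0, 300.0, 30)
c_obs = breakthrough(t_obs, k=0.5, q0=4.0) \
        + np.random.default_rng(5).normal(0, 0.002, 30)

sse = lambda p: np.sum((breakthrough(t_obs, *p) - c_obs) ** 2)
fit = minimize(sse, x0=[0.3, 3.0], method="Nelder-Mead")   # downhill simplex
print(fit.x)   # estimated (k, q0)
```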

  20. RTMOD: Real-Time MODel evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)

    2000-01-01

    The 1998 - 1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution in both latitude and longitude, the domain extending from 5W to 40E and from 40N to 65N. Hypothetical releases were notified to the 28 model forecasters around the world via the web, with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, the existing statistical results were recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time

  1. A stepwise model for simulation-based curriculum development for clinical skills, a modification of the six-step approach.

    Science.gov (United States)

    Khamis, Nehal N; Satava, Richard M; Alnassar, Sami A; Kern, David E

    2016-01-01

    Despite the rapid growth in the use of simulation in health professions education, courses vary considerably in quality. Many do not integrate efficiently into an overall school/program curriculum or conform to academic accreditation requirements. Moreover, some of the guidelines for simulation design are specialty specific. We designed a model that integrates best practices for effective simulation-based training and a modification of Kern et al.'s 6-step approach for curriculum development. We invited international simulation and health professions education experts to complete a questionnaire evaluating the model. We reviewed comments and suggested modifications from respondents and reached consensus on a revised version of the model. We recruited 17 simulation and education experts. They expressed a consensus on the seven proposed curricular steps: problem identification and general needs assessment, targeted needs assessment, goals and objectives, educational strategies, individual assessment/feedback, program evaluation, and implementation. We received several suggestions for descriptors that applied the steps to simulation, leading to some revisions in the model. We have developed a model that integrates principles of curriculum development and simulation design that is applicable across specialties. Its use could lead to high-quality simulation courses that integrate efficiently into an overall curriculum.

  2. Mixed models for data from thorough QT studies: part 2. One-step assessment of conditional QT prolongation.

    Science.gov (United States)

    Schall, Robert

    2011-01-01

    We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.
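
    As a rough illustration of the 'one-step' analysis, the sketch below fits a random-coefficients mixed model to repeated QTc measurements using statsmodels. The data frame and column names (subject, treatment, time, baseline, qtc) are hypothetical, and statsmodels' MixedLM does not reproduce the unstructured within-treatment covariance patterns compared in the paper -- this is only a minimal stand-in for the general approach.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical thorough-QT data: repeated QTc readings per subject.
        rng = np.random.default_rng(0)
        n_subj, n_times = 40, 6
        df = pd.DataFrame({
            "subject":   np.repeat(np.arange(n_subj), n_times),
            "time":      np.tile(np.arange(n_times), n_subj),
            "treatment": np.repeat(rng.integers(0, 2, n_subj), n_times),
        })
        df["baseline"] = np.repeat(rng.normal(400, 10, n_subj), n_times)
        df["qtc"] = df["baseline"] + 5 * df["treatment"] + rng.normal(0, 8, len(df))

        # Random-coefficients model: random intercept and slope in time per subject.
        model = smf.mixedlm("qtc ~ baseline + treatment * time", df,
                            groups=df["subject"], re_formula="~time")
        fit = model.fit()
        print(fit.summary())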

  3. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results ... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration ...

  4. Time Modeling: Salvatore Sciarrino, Windows and Beclouding

    Directory of Open Access Journals (Sweden)

    Acácio Tadeu de Camargo Piedade

    2017-08-01

    In this article I intend to discuss one of the figures created by the Italian composer Salvatore Sciarrino: the windowed form. After the composer's explanation of this figure, I argue that windows in composition can open the musical discourse both inwards and outwards. On one side, they point to the composition's inner ambiences and constitute an internal remission. On the other, they prompt the audience to comprehend the external reference, thereby constructing intertextuality. After the outward window form, I will consider some techniques of distortion, particularly one that I call beclouding. To conclude, I will comment on the question of memory and of composition as time modeling.

  5. Gas flushing through hyper-acidic crater lakes: the next steps within a reframed monitoring time window

    Science.gov (United States)

    Rouwet, Dmitri

    2016-04-01

    evaporative degassing plumes can be useful as a monitoring tool on the short term, but only if the underlying process of gas flushing through acidic lakes is better understood and linked with the lake water chemistry; (2) the second method sets aside chemical kinetics, degassing models and the dynamics of phreatic eruptions, and sticks to the classical principle in geology that "the past is the key to the future". How did lake chemistry parameters vary during the various stages of unrest and eruption, on a purely mathematical basis? Can we recognise patterns in the numerical values related to changes in volcanic activity? Water chemistry alone, as a monitoring tool for extremely dynamic and erupting crater lake systems, is inefficient in revealing short-term precursors of single phreatic eruptions within the current perspective of the residence-time-dependent monitoring time window. The monitoring rules established over decades based only on water chemistry have thus become somewhat obsolete and need revision.

  6. Effect of a simple two-step warfarin dosing algorithm on anticoagulant control as measured by time in therapeutic range : a pilot study

    NARCIS (Netherlands)

    Kim, Y. -K.; Nieuwlaat, R.; Connolly, S. J.; Schulman, S.; Meijer, K.; Raju, N.; Kaatz, S.; Eikelboom, J. W.

    Background: The efficacy and safety of vitamin K antagonists for the prevention of thromboembolism are dependent on the time for which the International Normalized Ratio (INR) is in the therapeutic range. The objective of our study was to determine the effect of introducing a simple two-step dosing

  7. Adaptively time stepping the stochastic Landau-Lifshitz-Gilbert equation at nonzero temperature: Implementation and validation in MuMax3

    Science.gov (United States)

    Leliaert, J.; Mulkers, J.; De Clercq, J.; Coene, A.; Dvornik, M.; Van Waeyenberge, B.

    2017-12-01

    Thermal fluctuations play an increasingly important role in micromagnetic research relevant for various biomedical and other technological applications. Until now, it was deemed necessary to use a time stepping algorithm with a fixed time step in order to perform micromagnetic simulations at nonzero temperatures. However, Berkov and Gorn have shown in [D. Berkov and N. Gorn, J. Phys.: Condens. Matter, 14, L281, 2002] that the drift term which generally appears when solving stochastic differential equations can only influence the length of the magnetization. This quantity is, however, fixed in the case of the stochastic Landau-Lifshitz-Gilbert equation. In this paper, we exploit this fact to straightforwardly extend existing high-order solvers with an adaptive time stepping algorithm. We implemented the presented methods in the freely available GPU-accelerated micromagnetic software package MuMax3 and used it to extensively validate them. Besides the advantage of having control over the error tolerance, we report a twentyfold speedup without loss of accuracy when using the presented methods, compared to the previous best practice of using Heun's solver with a small fixed time step.
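
    The core of such an adaptive scheme can be sketched independently of micromagnetics. The toy controller below is a deterministic skeleton, not MuMax3's actual implementation: it compares one full Heun step against two half steps, accepts the step if the estimated error is within tolerance, and rescales the step size. For the stochastic LLG the same random increments would have to be reused consistently across the compared steps, which is omitted here.

        import numpy as np

        def heun_step(f, t, y, dt):
            # One step of Heun's (explicit trapezoidal) method.
            k1 = f(t, y)
            k2 = f(t + dt, y + dt * k1)
            return y + 0.5 * dt * (k1 + k2)

        def adaptive_heun(f, t, y, dt, tol=1e-6):
            # Step doubling: one full step vs. two half steps gives an error estimate.
            while True:
                y_full = heun_step(f, t, y, dt)
                y_half = heun_step(f, t + 0.5 * dt, heun_step(f, t, y, 0.5 * dt), 0.5 * dt)
                err = np.max(np.abs(y_full - y_half))
                dt_new = 0.9 * dt * (tol / max(err, 1e-300)) ** (1.0 / 3.0)  # Heun is 2nd order
                if err <= tol:
                    return t + dt, y_half, dt_new  # accept the more accurate solution
                dt = dt_new  # reject and retry with a smaller step

        # Example: damped precession toward the z-axis, a caricature of LLG dynamics.
        f = lambda t, m: np.cross(m, [0.0, 0.0, 1.0]) - 0.1 * np.cross(m, np.cross(m, [0.0, 0.0, 1.0]))
        t, m, dt = 0.0, np.array([1.0, 0.0, 0.0]), 0.1
        while t < 10.0:
            t, m, dt = adaptive_heun(f, t, m, dt)
        print(m, np.linalg.norm(m))  # magnetization length should stay near 1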

  8. Characterization of olive oil volatiles by multi-step direct thermal desorption-comprehensive gas chromatography-time-of-flight mass spectrometry using a programmed temperature vaporizing injector

    NARCIS (Netherlands)

    de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.

    2008-01-01

    The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application the system is used for the characterization of fresh versus aged olive

  9. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song

    2015-01-01

    ...semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated ... a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model ...

  10. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated.

  11. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Science.gov (United States)

    Aburahma, Mona Hassan

    2015-01-01

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated. PMID:28975906

  12. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students' Participation.

    Science.gov (United States)

    Aburahma, Mona Hassan

    2015-07-27

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students' enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated.

  13. Sensitivity of microwave ablation models to tissue biophysical properties: A first step toward probabilistic modeling and treatment planning.

    Science.gov (United States)

    Sebek, Jan; Albin, Nathan; Bortel, Radoslav; Natarajan, Bala; Prakash, Punit

    2016-05-01

    Computational models of microwave ablation (MWA) are widely used during the design optimization of novel devices and are under consideration for patient-specific treatment planning. The objective of this study was to assess the sensitivity of computational models of MWA to tissue biophysical properties. The Morris method was employed to assess the global sensitivity of the coupled electromagnetic-thermal model, which was implemented with the finite element method (FEM). The FEM model incorporated temperature dependencies of tissue physical properties. The variability of the model was studied using six different outputs to characterize the size and shape of the ablation zone, as well as impedance matching of the ablation antenna. Furthermore, the sensitivity results were statistically analyzed and absolute influence of each input parameter was quantified. A framework for systematically incorporating model uncertainties for treatment planning was suggested. A total of 1221 simulations, incorporating 111 randomly sampled starting points, were performed. Tissue dielectric parameters, specifically relative permittivity, effective conductivity, and the threshold temperature at which they transitioned to lower values (i.e., signifying desiccation), were identified as the most influential parameters for the shape of the ablation zone and antenna impedance matching. Of the thermal parameters considered in this study, the nominal blood perfusion rate and the temperature interval across which the tissue changes phase were identified as the most influential. The latent heat of tissue water vaporization and the volumetric heat capacity of the vaporized tissue were recognized as the least influential parameters. Based on the evaluation of absolute changes, the most important parameter (perfusion) had approximately 40.23 times greater influence on ablation area than the least important parameter (volumetric heat capacity of vaporized tissue). Another significant input parameter
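
    The Morris screening itself is easy to reproduce with standard tooling. The sketch below uses the SALib package on a stand-in model; the parameter names and bounds are loosely inspired by this record but are illustrative assumptions, not the study's actual FEM inputs or ranges.

        import numpy as np
        from SALib.sample.morris import sample
        from SALib.analyze import morris

        # Illustrative inputs only -- not the study's actual parameters or bounds.
        problem = {
            "num_vars": 4,
            "names": ["rel_permittivity", "eff_conductivity", "perfusion", "heat_capacity"],
            "bounds": [[30, 60], [1.0, 2.5], [0.5, 5.0], [3.0, 4.5]],
        }

        # Generate Morris trajectories (N trajectories of num_vars + 1 points each).
        X = sample(problem, N=100, num_levels=4)

        def ablation_area(x):
            # Toy scalar output standing in for the coupled EM-thermal FEM model.
            eps, sigma, w, c = x.T
            return sigma * np.log(eps) / (1.0 + 0.3 * w) / c

        Y = ablation_area(X)

        # Elementary-effects statistics: mu_star ranks overall influence,
        # sigma flags nonlinearity and interactions.
        Si = morris.analyze(problem, X, Y, num_levels=4)
        print(dict(zip(problem["names"], Si["mu_star"])))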

  14. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation basically consists of optimizing the S.T.I. of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (the computer code FRANTIC III). A new approximation to compute system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.) and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (author)

  15. Feedback min-max model predictive control using robust one-step sets

    Science.gov (United States)

    Cychowski, Marcin T.; O'Mahony, Tom

    2010-07-01

    A solution to the infinite-horizon min-max model predictive control (MPC) problem of constrained polytopic systems has recently been defined in terms of a sequence of free control moves over a fixed horizon and a state feedback law in the terminal region using a time-varying terminal cost. The advantage of this formulation is the enlargement of the admissible set of initial states without sacrificing local optimality, but this comes at the expense of higher computational complexity. This article, by means of a counterexample, shows that the robust feasibility and stability properties of such algorithms are not, in general, guaranteed when more than one control move is adopted. For this reason, this work presents a novel formulation of min-max MPC based on the concept of within-horizon feedback and robust contractive set theory that ensures robust stability for any choice of the control horizon. A parameter-dependent feedback extension is also proposed and analysed. The effectiveness of the algorithms is demonstrated with two numerical examples.

  16. Canadian children's and youth's pedometer-determined steps/day, parent-reported TV watching time, and overweight/obesity: The CANPLAY Surveillance Study

    OpenAIRE

    Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M

    2011-01-01

    Background: This study examines associations between pedometer-determined steps/day, parent-reported child's Body Mass Index (BMI), and time typically spent watching television between school and dinner. Methods: Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...

  17. Generation of postured voxel-based human models for the study of step voltage excited by lightning current

    Science.gov (United States)

    Gao, J.; Munteanu, I.; Müller, W. F. O.; Weiland, T.

    2011-07-01

    With the development of medical techniques and computational electromagnetics, high resolution anatomic human models have already been widely developed and used in the computation of electromagnetic fields induced in the human body. Although these so-called voxel-based human models are powerful tools for research on electromagnetic safety, their unchangeable standing posture makes it impossible to simulate realistic scenarios in which people assume many different postures. This paper describes a poser program package, developed as an improved version of the free-form deformation technique, to overcome this problem. It can set rotation angles of different human joints and then deform the original human model into different postures. The original whole-body human model can be deformed smoothly, continuity of internal tissues and organs is maintained, and the mass of different tissues and organs is conserved at a reasonable level. As a typical application of the postured human models, this paper also studies the effect of the step voltage due to a lightning strike on the human body. Two voxel-based human body models, with standing and walking postures, were developed and integrated into simulation models to compute the current density distribution in a human body shocked by the step voltage. In order to speed up the transient simulation, the reduced-c technique was used, leading to a speedup factor of around 20. The error introduced by the reduced-c technique is discussed and simulation results are presented in detail.

  18. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach, which is to calculate model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters.
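
    For readers unfamiliar with the model, the stance phase of the SLIP can be written as a small ODE system. The sketch below integrates it with SciPy; the equations are the standard SLIP stance dynamics in polar coordinates about the foot, while the parameter values and touchdown state are illustrative assumptions rather than the paper's fitted dynamical leg parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative parameters (not the paper's fitted values).
        m, k, L0, g = 80.0, 20000.0, 1.0, 9.81   # mass, leg stiffness, rest length, gravity

        def slip_stance(t, s):
            # State s = [r, dr, th, dth]: leg length, leg angle from vertical, and rates.
            r, dr, th, dth = s
            ddr = r * dth**2 + (k / m) * (L0 - r) - g * np.cos(th)
            ddth = (g * np.sin(th) - 2.0 * dr * dth) / r
            return [dr, ddr, dth, ddth]

        def takeoff(t, s):
            # Stance ends when the leg regains its rest length.
            return s[0] - L0
        takeoff.terminal = True
        takeoff.direction = 1

        # Touchdown: slightly compressed leg, forward angular velocity.
        s0 = [0.99 * L0, -0.5, -0.3, 3.0]
        sol = solve_ivp(slip_stance, (0.0, 1.0), s0, events=takeoff, max_step=1e-3)
        print("takeoff at t = %.3f s, leg angle = %.3f rad" % (sol.t[-1], sol.y[2, -1]))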

  19. Use of the step-up action research model to improve trauma-related nursing educational practice.

    Science.gov (United States)

    Seale, Ielse; De Villiers, Johanna C

    2015-10-23

    A lack of authentic learning opportunities influences the quality of emergency training of nursing students. The purpose of this article is to describe how the step-up action research model was used to improve the quality of trauma-related educational practice for undergraduate nursing students. To reduce deaths caused by trauma, healthcare workers should be competent to provide emergency care and collaborate effectively with one another. A simulated mass casualty incident, structured to support the integration of theory into practice, became a more rigorous action research activity focused on quality improvement of the mass casualty incident. The results indicated improved student learning, partnership appreciation, improved student coping mechanisms, and increased student exposure. Quality emergency training thus results in better real-life collaboration in emergency contexts. The step-up action research model proved to be a collaborative and flexible process. To improve the quality and rigour of educational programmes it is therefore recommended that the step-up action research model be routinely used in the execution of educational practices.

  20. Time evolution in deparametrized models of loop quantum gravity

    Science.gov (United States)

    Assanioussi, Mehdi; Lewandowski, Jerzy; Mäkinen, Ilkka

    2017-07-01

    An important aspect in understanding the dynamics in the context of deparametrized models of loop quantum gravity (LQG) is to obtain sufficient control over the quantum evolution generated by a given Hamiltonian operator. More specifically, we need to be able to compute the evolution of relevant physical states and observables with relatively good precision. In this article, we introduce an approximation method to deal with the physical Hamiltonian operators in deparametrized LQG models, and we apply it to models in which a free Klein-Gordon scalar field or a nonrotational dust field is taken as the physical time variable. This method is based on using standard time-independent perturbation theory of quantum mechanics to define a perturbative expansion of the Hamiltonian operator, the small perturbation parameter being determined by the Barbero-Immirzi parameter β. This method allows us to define an approximate spectral decomposition of the Hamiltonian operators and hence to compute the evolution over a certain time interval. As a specific example, we analyze the evolution of expectation values of the volume and curvature operators starting from certain physical initial states, using both the perturbative method and a straightforward expansion of the expectation value in powers of the time variable. This work represents a first step toward achieving the goal of understanding and controlling the new dynamics developed in Alesci et al. [Phys. Rev. D 91, 124067 (2015), 10.1103/PhysRevD.91.124067] and Assanioussi et al. [Phys. Rev. D 92, 044042 (2015), 10.1103/PhysRevD.92.044042].
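
    The perturbative step mirrors textbook time-independent perturbation theory. As a generic illustration only (not the paper's specific operators), splitting the physical Hamiltonian by a small parameter β gives the familiar expansion

        \hat{H} = \hat{H}_0 + \beta\,\hat{V}, \qquad
        E_n = E_n^{(0)} + \beta\,\langle n^{(0)}|\hat{V}|n^{(0)}\rangle
            + \beta^2 \sum_{m \neq n} \frac{|\langle m^{(0)}|\hat{V}|n^{(0)}\rangle|^2}{E_n^{(0)} - E_m^{(0)}}
            + \mathcal{O}(\beta^3),

    after which the evolution over a time interval t can be approximated through the corrected spectral decomposition, e^{-i\hat{H}t} \approx \sum_n e^{-iE_n t}\,|n\rangle\langle n|.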

  1. Summary of Simplified Two Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    Science.gov (United States)

    Marek, C. John; Molnar, Melissa

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at early times with smaller water concentrations. This gives the average chemical kinetic time as a function of the initial overall fuel/air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (greater than 1 x 10^-20 moles per cc) in the mixture, and gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
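
    The two-regime idea reduces to a few lines of arithmetic once the correlations are in hand. The sketch below shows the plumbing only: the exponential-fit coefficients, the water-concentration switch threshold, and the correlation forms are placeholders, not the report's regressed values.

        import numpy as np

        H2O_SWITCH = 1e-20  # mol/cc threshold between the two kinetic-time regimes

        def tau_initial(phi, T, P):
            # Regime 1 (early times, little water): averaged chemical kinetic time
            # as an exponential fit in equivalence ratio, temperature, pressure.
            # Coefficients are placeholders, not the report's regressed values.
            return 1e-6 * phi**-0.5 * (P / 101325.0)**-0.8 * np.exp(9000.0 / T)

        def tau_instantaneous(c_fuel, c_h2o, T, P):
            # Regime 2 (water-rich): instantaneous kinetic time from local mole
            # concentrations and state. Placeholder form.
            return 1e-7 * (c_fuel / (c_h2o + 1e-30))**0.3 * (P / 101325.0)**-0.5 * np.exp(7000.0 / T)

        def conversion(t, tau):
            # Simple first-order reaction expression for conversion in the reactor.
            return 1.0 - np.exp(-t / tau)

        # Pick the regime from the local water concentration, then step the reactor.
        c_h2o, c_fuel, phi, T, P, dt = 1e-22, 4e-6, 0.8, 1400.0, 5e5, 1e-4
        tau = tau_initial(phi, T, P) if c_h2o < H2O_SWITCH else tau_instantaneous(c_fuel, c_h2o, T, P)
        print("kinetic time %.3e s, conversion in dt: %.3f" % (tau, conversion(dt, tau)))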

  2. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    Science.gov (United States)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at early times with smaller water concentrations. This gives the average chemical kinetic time as a function of the initial overall fuel/air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1 x 10^-20 moles/cc) in the mixture, and gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.

  3. Improving the Physical Realism and Structural Accuracy of Protein Models by a Two-Step Atomic-Level Energy Minimization

    Science.gov (United States)

    Xu, Dong; Zhang, Yang

    2011-01-01

    Most protein structural prediction algorithms assemble structures as reduced models that represent amino acids by a reduced number of atoms to speed up the conformational search. Building accurate full-atom models from these reduced models is a necessary step toward a detailed function analysis. However, it is difficult to ensure that the atomic models retain the desired global topology while maintaining a sound local atomic geometry, because the reduced models often have unphysical local distortions. To address this issue, we developed a new program, called ModRefiner, to construct and refine protein structures from Cα traces based on a two-step, atomic-level energy minimization. The main-chain structures are first constructed from initial Cα traces and the side-chain rotamers are then refined together with the backbone atoms with the use of a composite physics- and knowledge-based force field. We tested the method by performing an atomic structure refinement of 261 proteins with the initial models constructed from both ab initio and template-based structure assemblies. Compared with other state-of-the-art programs, ModRefiner shows improvements in both global and local structures, which have more accurate side-chain positions, better hydrogen-bonding networks, and fewer atomic overlaps. ModRefiner is freely available at http://zhanglab.ccmb.med.umich.edu/ModRefiner. PMID:22098752
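
    The two-step strategy (optimize the backbone first, then relax everything under a composite energy) can be caricatured with a generic optimizer. The sketch below is not ModRefiner's force field or algorithm: it minimizes a made-up energy over a toy chain, first over "backbone" coordinates with "side-chain" coordinates frozen, then over all atoms together.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n = 10                           # residues in a toy chain
        x0 = rng.normal(size=(n, 2, 3))  # per residue: [backbone atom, side-chain atom]

        def energy(x_flat):
            # Made-up composite energy: ideal backbone bond lengths plus a mild
            # repulsion between all atoms (stand-ins for physics/knowledge terms).
            x = x_flat.reshape(n, 2, 3)
            bb = x[:, 0]
            bond = np.sum((np.linalg.norm(np.diff(bb, axis=0), axis=1) - 3.8) ** 2)
            flat = x.reshape(-1, 3)
            d = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1)
            rep = np.sum(np.exp(-d[np.triu_indices(len(flat), k=1)]))
            return bond + rep

        def backbone_energy(bb_flat, side):
            # Step 1 objective: move backbone only, side-chain atoms frozen.
            x = np.stack([bb_flat.reshape(n, 3), side], axis=1)
            return energy(x.ravel())

        side = x0[:, 1].copy()
        step1 = minimize(backbone_energy, x0[:, 0].ravel(), args=(side,), method="L-BFGS-B")
        x1 = np.stack([step1.x.reshape(n, 3), side], axis=1)

        # Step 2: relax all atoms together from the step-1 structure.
        step2 = minimize(energy, x1.ravel(), method="L-BFGS-B")
        print("energy: %.2f -> %.2f -> %.2f" % (energy(x0.ravel()), energy(x1.ravel()), step2.fun))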

  4. Improving the physical realism and structural accuracy of protein models by a two-step atomic-level energy minimization.

    Science.gov (United States)

    Xu, Dong; Zhang, Yang

    2011-11-16

    Most protein structural prediction algorithms assemble structures as reduced models that represent amino acids by a reduced number of atoms to speed up the conformational search. Building accurate full-atom models from these reduced models is a necessary step toward a detailed function analysis. However, it is difficult to ensure that the atomic models retain the desired global topology while maintaining a sound local atomic geometry, because the reduced models often have unphysical local distortions. To address this issue, we developed a new program, called ModRefiner, to construct and refine protein structures from Cα traces based on a two-step, atomic-level energy minimization. The main-chain structures are first constructed from initial Cα traces and the side-chain rotamers are then refined together with the backbone atoms with the use of a composite physics- and knowledge-based force field. We tested the method by performing an atomic structure refinement of 261 proteins with the initial models constructed from both ab initio and template-based structure assemblies. Compared with other state-of-the-art programs, ModRefiner shows improvements in both global and local structures, which have more accurate side-chain positions, better hydrogen-bonding networks, and fewer atomic overlaps. ModRefiner is freely available at http://zhanglab.ccmb.med.umich.edu/ModRefiner.

  5. Kinect-based choice reaching and stepping reaction time tests for clinical and in-home assessment of fall risk in older people: a prospective study.

    Science.gov (United States)

    Ejupi, Andreas; Gschwind, Yves J; Brodie, Matthew; Zagler, Wolfgang L; Lord, Stephen R; Delbaere, Kim

    2016-01-01

    Quick protective reactions such as reaching or stepping are important to avoid a fall or minimize injuries. We developed Kinect-based choice reaching and stepping reaction time tests (Kinect-based CRTs) and evaluated their ability to differentiate between older fallers and non-fallers, and the feasibility of administering them at home. A total of 94 community-dwelling older people were assessed on the Kinect-based CRTs in the laboratory and were followed up for falls for 6 months. Additionally, a subgroup (n = 20) conducted the Kinect-based CRTs at home. Signal processing algorithms were developed to extract features for the reaction, movement and total time from the Kinect skeleton data. Nineteen participants (20.2%) reported a fall in the 6 months following the assessment. The reaction time (fallers: 797 ± 136 ms, non-fallers: 714 ± 89 ms), movement time (fallers: 392 ± 50 ms, non-fallers: 358 ± 51 ms) and total time (fallers: 1189 ± 170 ms, non-fallers: 1072 ± 109 ms) of the reaching reaction time test differentiated well between the fallers and non-fallers. The stepping reaction time test did not significantly discriminate between the two groups in the prospective study. The correlations between the laboratory and in-home assessments were 0.689 for the reaching reaction time and 0.860 for the stepping reaction time. The study findings indicate that the Kinect-based CRTs are feasible to administer in clinical and in-home settings, and thus represent an important step towards the development of sensor-based fall risk self-assessments. With further validation, the assessments may prove useful as a fall risk screen and as home-based measures for monitoring changes over time and the effects of fall prevention interventions.
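
    Extracting reaction and movement times from skeleton data boils down to onset and arrival detection on a position trace. The sketch below is a simplified stand-in for the study's signal processing: the sampling rate, thresholds, and synthetic trace are assumptions, and it splits total response time into a reaction and a movement component.

        import numpy as np

        FS = 30.0  # Kinect skeleton stream, assumed ~30 frames/s

        def split_response(hand_pos, stim_frame, v_onset=0.2, target_dist=0.4):
            # hand_pos: (frames, 3) hand joint positions in metres.
            v = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) * FS  # speed per frame
            after = np.arange(len(v)) >= stim_frame
            onset = np.argmax((v > v_onset) & after)          # first fast frame after stimulus
            dist = np.linalg.norm(hand_pos - hand_pos[stim_frame], axis=1)
            arrive = np.argmax((dist > target_dist) & (np.arange(len(dist)) > onset))
            reaction = (onset - stim_frame) / FS * 1000.0     # ms
            movement = (arrive - onset) / FS * 1000.0         # ms
            return reaction, movement, reaction + movement

        # Synthetic trace: hand at rest for 0.7 s after the stimulus, then reaching.
        t = np.arange(0, 2, 1 / FS)
        x = np.clip((t - 0.7) / 0.4, 0, 1) * 0.5              # 0.5 m reach over 0.4 s
        hand = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
        print([round(v) for v in split_response(hand, stim_frame=0)])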

  6. A One-Step-Ahead Smoothing-Based Joint Ensemble Kalman Filter for State-Parameter Estimation of Hydrological Models

    KAUST Repository

    El Gharamti, Mohamad

    2015-11-26

    The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model’s state and parameters. These are generally estimated following a state-parameters joint augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which we introduce a one-step-ahead smoothing of the state before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in the computational cost.
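
    The joint (augmented) EnKF update at the heart of such schemes fits in a few lines. The sketch below implements a stochastic-EnKF analysis step on an augmented state-parameter vector with a linear observation operator; it is a generic textbook update, not the paper's one-step-ahead smoothing variant, and all dimensions and noise levels are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n_state, n_param, n_obs, n_ens = 20, 3, 5, 50

        def enkf_update(E, H, y, R):
            # E: (n_aug, n_ens) ensemble of augmented [state; parameters] vectors.
            # H: (n_obs, n_aug) observation operator, y: observations, R: obs covariance.
            A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
            HE = H @ E
            HA = HE - HE.mean(axis=1, keepdims=True)
            P_yy = HA @ HA.T / (n_ens - 1) + R               # innovation covariance
            P_xy = A @ HA.T / (n_ens - 1)                    # cross covariance
            K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))  # Kalman gain
            # Perturbed observations keep the analysis spread statistically consistent.
            Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
            return E + K @ (Y - HE)

        # Toy setup: observe the first 5 state components; parameters are updated
        # purely through their sampled correlation with the state.
        E = rng.normal(size=(n_state + n_param, n_ens))
        H = np.zeros((n_obs, n_state + n_param))
        H[np.arange(n_obs), np.arange(n_obs)] = 1.0
        y = rng.normal(size=n_obs)
        R = 0.1 * np.eye(n_obs)
        E_a = enkf_update(E, H, y, R)
        print(E_a.shape)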

  7. Assimilation of LAI time-series in crop production models

    Science.gov (United States)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has been shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables into crop production models would improve agricultural decision support at both the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated into the crop production model LINTUL to improve yield forecasting at field level. The effects of the assimilation method and the amount of assimilated observations were evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated into the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R^2 of 0.82 when compared with field-measured yield. Furthermore, we analysed the potential of assimilating LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements into the LINTUL model over the season of 2011. LINTUL-3 furthermore reveals the main growth-reducing factors, which are useful for farm decision support. The combination of crop models and sensor
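
    The 'updating' technique mentioned here is the simplest assimilation scheme: whenever an observation is available, the simulated state is replaced by the observed value before the model continues. A minimal sketch, with a toy logistic LAI model and made-up observations standing in for LINTUL-3 and the field radiometer data:

        import numpy as np

        def lai_model_step(lai, dt=1.0, r=0.08, lai_max=6.0):
            # Toy logistic LAI growth, a stand-in for the LINTUL canopy dynamics.
            return lai + dt * r * lai * (1.0 - lai / lai_max)

        obs = {30: 0.9, 60: 2.8, 90: 4.9}    # day-of-season -> measured LAI (made up)

        lai, trajectory = 0.1, []
        for day in range(1, 121):
            lai = lai_model_step(lai)
            if day in obs:
                lai = obs[day]               # 'updating': replace state with observation
            trajectory.append(lai)
        print("end-of-season LAI: %.2f" % trajectory[-1])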

  8. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  9. Grey-box Modeling for System Identification of Household Refrigerators: a Step Toward Smart Appliances

    DEFF Research Database (Denmark)

    Costanzo, Giuseppe Tommaso; Sossan, Fabrizio; Marinelli, Mattia

    2013-01-01

    This paper presents the grey-box modeling of a vapor-compression refrigeration system for residential applications based on maximum likelihood estimation of parameters in stochastic differential equations. Models obtained are useful in the view of controlling refrigerators as flexible consumption ... such as heat pumps for space heating, in order to smooth the load factor during peak hours, enhance reliability and efficiency in power networks and reduce operational costs.

  10. Modeling of the steam hydrolysis in a two-step process for hydrogen production by solar concentrated energy

    Science.gov (United States)

    Valle-Hernández, Julio; Romero-Paredes, Hernando; Pacheco-Reyes, Alejandro

    2017-06-01

    In this paper the simulation of steam hydrolysis for hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 to a lower-valence cerium oxide at high temperature, where concentrated solar energy is used as the source of heat, and of the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. The modeling of the endothermic reduction step was presented at SolarPACES 2015. This work presents the modeling of the exothermic step: the hydrolysis of the cerium(III) oxide at lower temperature inside the solar reactor, forming H2 and regenerating the initial cerium oxide. For this model, three sections of the pipe where the reaction occurs were considered: the steam inlet, the porous medium, and the hydrogen outlet. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and capture the fluid dynamics and heat transfer within the pipe. This work is part of the project "Solar Fuels and Industrial Processes" from the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).
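
    For reference, the two steps of the ceria cycle described in this record are commonly written as follows (a standard textbook form; the abstract's wording suggests full reduction to the cerium(III) oxide):

        2\,\mathrm{CeO_2} \;\longrightarrow\; \mathrm{Ce_2O_3} + \tfrac{1}{2}\,\mathrm{O_2} \qquad \text{(endothermic, solar-driven reduction)}

        \mathrm{Ce_2O_3} + \mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{CeO_2} + \mathrm{H_2} \qquad \text{(exothermic steam hydrolysis)}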

  11. Statistical modeling of tear strength for one step fixation process of reactive printing and easy care finishing

    International Nuclear Information System (INIS)

    Asim, F.; Mahmood, M.

    2017-01-01

    Statistical modeling plays a significant role in predicting the impact of potential factors affecting the one-step fixation process of reactive printing and easy care finishing. The investigation of factors significant to the tear strength of cotton fabric in single-step fixation of reactive printing and easy care finishing has been carried out in this research work using experimental design techniques. The potential design factors were: concentration of reactive dye, concentration of crease resistant, fixation method and fixation temperature. The experiments were designed using DoE (Design of Experiments) and analyzed with the Design-Expert software. A detailed analysis of the significant factors and interactions, including ANOVA (Analysis of Variance), residuals, model accuracy and the statistical model for tear strength, is presented. The interaction and contour plots of the vital factors have been examined. The statistical analysis showed that each factor interacts with the others, and most of the investigated factors showed a curvature effect. After critical examination of the significant plots, a quadratic model of tear strength with significant terms and their interactions at alpha = 0.05 has been developed. The calculated coefficient of determination, R^2, of the developed model is 0.9056. This high value indicates that the developed equation will predict tear strength accurately over the investigated range of values. (author)
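
    Fitting such a quadratic (response-surface) model is routine with formula-based regression. The sketch below uses statsmodels on made-up data; the column names echo three of the four factors in this record, while the data, coding, and resulting coefficients are purely illustrative.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 30
        df = pd.DataFrame({
            "dye":    rng.uniform(10, 50, n),    # reactive dye conc. (g/L), made up
            "crease": rng.uniform(40, 120, n),   # crease resistant conc. (g/L), made up
            "temp":   rng.uniform(140, 180, n),  # fixation temperature (C), made up
        })
        df["tear"] = (30 - 0.1 * df["dye"] - 0.05 * df["crease"]
                      + 0.002 * df["dye"] * df["crease"] - 0.0005 * df["temp"]**2
                      + rng.normal(0, 0.5, n))

        # Quadratic response surface: main effects, two-way interactions, squares.
        model = smf.ols("tear ~ (dye + crease + temp)**2 + I(dye**2) + I(crease**2) + I(temp**2)", df)
        fit = model.fit()
        print("R^2 = %.4f" % fit.rsquared)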

  12. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating circumstances. First, a practical PV system model is studied, including determination of the series and shunt resistances, which are neglected in some research. Moreover, in the proposed algorithm the duty ratio of a boost DC-DC converter is the object of the perturbation, using input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
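
    The adaptive-step P&O loop itself is compact. The sketch below perturbs a boost converter duty ratio and scales the perturbation step with the observed power slope; the PV curve, gain, and limits are illustrative assumptions, not the thesis's tuned design.

        import numpy as np

        def pv_power(d):
            # Toy PV-plus-boost power curve versus duty ratio, peaking near d = 0.55.
            return max(0.0, 100.0 - 400.0 * (d - 0.55) ** 2)

        def adaptive_po(d0=0.30, iters=60, k=0.004, step_min=0.001, step_max=0.05):
            d_prev, p_prev = d0, pv_power(d0)
            d = d0 + 0.02                    # initial perturbation
            for _ in range(iters):
                p = pv_power(d)
                dP, dD = p - p_prev, d - d_prev
                # Adaptive step: steep power slope -> far from MPP -> bigger step.
                step = min(max(k * abs(dP / dD), step_min), step_max) if dD != 0 else step_min
                direction = 1.0 if dP * dD > 0 else -1.0   # classic P&O decision
                d_prev, p_prev = d, p
                d = min(max(d + direction * step, 0.05), 0.95)
            return d, pv_power(d)

        print("duty %.3f -> power %.1f W" % adaptive_po())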

  13. Prenatal cleft lip and maxillary alveolar defect repair in a 2-step fetal lamb model.

    NARCIS (Netherlands)

    Wenghoefer, M.H.; Deprest, J.; Goetz, W.; Kuijpers-Jagtman, A.M.; Bergé, S.J.

    2007-01-01

    PURPOSE: As there is no satisfactory animal model that simulates the complex cleft lip and palate anatomy in a standardized defect while also allowing extensive surgical procedures, an improved fetal lamb model for cleft surgery was developed. MATERIALS AND

  14. Practical Steps for Informing Literacy Instruction: A Diagnostic Decision-Making Model.

    Science.gov (United States)

    Kibby, Michael W.

    This monograph presents a diagnostic decision-making model for reading, elementary, and special education teachers to use as a guide in assessing and evaluating students' reading abilities to design and provide more appropriate reading instruction. The model in the monograph gives an overall perspective or gestalt of the components and strategies…

  15. Mechanical model of the recovery reaction from stumbling: effect of step length on trunk control

    NARCIS (Netherlands)

    Forner-Cordero, A.; Koopman, Hubertus F.J.M.; van der Helm, F.C.T.

    2014-01-01

    Falling after a gait perturbation is a major problem for elderly people. The goal of this paper is to model some mechanical limitations of the recovery strategies performed after a trip or stumble, such as elevating or lowering strategies. A biomechanical model of the recovery was used to interpret

  16. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    Science.gov (United States)

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  17. The Role of Peer Tutoring: Steps to Describing a Three-Dimensional Model.

    Science.gov (United States)

    Davis, Kevin

    A comprehensive, three-dimensional model of peer tutoring, constructed by gathering current theories and research and locating them on a dynamic continuum of the tutoring process, allows researchers to break new ground in tutor research and might eventually offer a new heuristic for training peer tutors. The first axis in the model, the focus…

  18. The Partnering with Patients Model of Nursing Interventions: A First Step to a Practice Theory

    Science.gov (United States)

    Moyle, Wendy; Rickard, Claire M.; Chambers, Suzanne K.; Chaboyer, Wendy

    2015-01-01

    The development of a body of knowledge, gained through research and theory building, is one hallmark of a profession. This paper presents the “Partnering with Patients Model of Nursing Interventions”, providing direction towards how complex nursing interventions can be developed, tested and subsequently adopted into practice. Coalescence of understanding of patient-centred care, the capabilities approach and the concept of complex healthcare interventions led to the development of the model assumptions and concepts. Application of the model to clinical practice is described, including presentation of a case study, and areas for future research including understanding both patients’ and nurses’ perceptions and experiences when the model is in use, and testing the effect of nursing interventions based on the model are recommended. PMID:27417760

  19. The Partnering with Patients Model of Nursing Interventions: A First Step to a Practice Theory.

    Science.gov (United States)

    Moyle, Wendy; Rickard, Claire M; Chambers, Suzanne K; Chaboyer, Wendy

    2015-04-24

    The development of a body of knowledge, gained through research and theory building, is one hallmark of a profession. This paper presents the "Partnering with Patients Model of Nursing Interventions", providing direction towards how complex nursing interventions can be developed, tested and subsequently adopted into practice. Coalescence of understanding of patient-centred care, the capabilities approach and the concept of complex healthcare interventions led to the development of the model assumptions and concepts. Application of the model to clinical practice is described, including presentation of a case study, and areas for future research including understanding both patients' and nurses' perceptions and experiences when the model is in use, and testing the effect of nursing interventions based on the model are recommended.

  20. Modeling Coastal Vulnerability through Space and Time.

    Directory of Open Access Journals (Sweden)

    Thomas Hopper

    Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across

  1. Modeling Coastal Vulnerability through Space and Time.

    Science.gov (United States)

    Hopper, Thomas; Meixler, Marcia S

    2016-01-01

    Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across multiple time

  2. A step-indexed Kripke model of hidden state via recursive properties on recursively defined metric spaces

    DEFF Research Database (Denmark)

    Schwinghammer, Jan; Birkedal, Lars; Støvring, Kristian

    2011-01-01

    Frame and anti-frame rules have been proposed as proof rules for modular reasoning about programs. Frame rules allow one to hide irrelevant parts of the state during verification, whereas the anti-frame rule allows one to hide local state from the context. We give the first sound model for Charguéraud and Pottier's type and capability system including both frame and anti-frame rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend the model to account for Pottier's generalized frame and anti-frame rules, where invariants are generalized to families of invariants indexed over pre-orders. This generalization enables reasoning about some well-bracketed as well as (locally) monotonic uses of local state.

  3. Two-step grafting significantly enhances the survival of foetal dopaminergic transplants and induces graft-derived vascularisation in a 6-OHDA model of Parkinson's disease.

    Science.gov (United States)

    Büchele, Fabian; Döbrössy, Máté; Hackl, Christina; Jiang, Wei; Papazoglou, Anna; Nikkhah, Guido

    2014-08-01

    Following transplantation of foetal primary dopamine (DA)-rich tissue for neurorestorative treatment of Parkinson's disease (PD), only 5-10% of the functionally relevant DAergic cells survive, both in experimental models and in clinical studies. The current work tested how a two-step grafting protocol could have a positive impact on graft survival: DAergic tissue is divided into two portions and grafted in two separate sessions into the same target area within a defined time interval. We hypothesized that the first graft creates a "DAergic" microenvironment or "nest", similar to the perinatal substantia nigra, that stimulates and protects the second graft. 6-OHDA-lesioned rats were sequentially transplanted with wild-type (GFP-, first graft) and transgenic (GFP+, second graft) DAergic cells at time intervals of 2, 5 or 9 days. Each group was further divided into two sub-groups receiving either 200k (low cell number groups: 2dL, 5dL, 9dL) or 400k cells (high cell number groups: 2dH, 5dH, 9dH) as the first graft. During the second transplantation, all groups received the same amount of 200k GFP+ cells. Controls received either low or high cell numbers in one single session (standard protocol). Drug-induced rotations, at 2 and 6 weeks after grafting, showed significant improvement compared to the baseline lesion levels, without significant differences between the groups. Rats were sacrificed 8 weeks after transplantation for post-mortem histological assessment. Both two-step groups with the time interval of 2 days (2dL and 2dH) showed a significantly higher survival of DAergic cells compared to their respective standard control group (2dL, +137%; 2dH, +47%). Interposing longer intervals of 5 or 9 days resulted in the loss of statistical significance, neutralising the beneficial two-step grafting effect. Furthermore, the transplants in the 2dL and 2dH groups had higher graft volume and DA-fibre-density values compared to all other two-step groups. They also showed intense growth of

  4. Real time model for public transportation management

    Directory of Open Access Journals (Sweden)

    Ireneusz Celiński

    2014-03-01

    Full Text Available Background: The article outlines managing a public transportation fleet in the dynamic aspect. There are currently many technical possibilities for identifying demand in the transportation network. It is also possible to indicate a legitimate basis for estimating and steering demand. The article describes a general public transportation fleet management concept based on balancing demand and supply. Material and methods: The presented method utilizes a matrix description of demand for transportation based on telemetric and telecommunication data. Emphasis was placed mainly on the general concept and not on the manner in which the data were collected by other researchers. Results: The above model gave results in the form of a system for managing a fleet in real time. The objective of the system is also to optimally utilize the means of transportation at the disposal of service providers. Conclusions: The presented concept enables a new perspective on managing public transportation fleets. In case of implementation, the project would facilitate, among other things, designing dynamic timetables, updated based on observed demand, and even designing dynamic points of access to public transportation lines. Further research should encompass so-called rerouting based on dynamic measurements of the characteristics of the transportation system.

  5. Integration into Big Data: First Steps to Support Reuse of Comprehensive Toxicity Model Modules (SOT)

    Science.gov (United States)

    Data surrounding the needs of human disease and toxicity modeling are largely siloed, limiting the ability to extend and reuse modules across knowledge domains. Using an infrastructure that supports integration across knowledge domains (animal toxicology, high-throughput screening...

  6. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
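
    As context for the analytic extensions described above, the standard Gaussian process predictive mean and variance for an exact (non-random) test input can be sketched as follows; the paper's contribution propagates a Gaussian distribution on the input through these quantities. A minimal sketch with an assumed squared-exponential kernel, not the authors' code:

    ```python
    import numpy as np

    def gp_predict(X, y, x_star, ell=1.0, sf2=1.0, noise=0.1):
        """Standard GP predictive mean and variance for exact test inputs
        (squared-exponential kernel; hyperparameters are illustrative)."""
        def k(a, b):
            return sf2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

        K = k(X, X) + noise * np.eye(len(X))     # noisy training covariance
        k_s = k(X, x_star)                       # train-test covariances
        alpha = np.linalg.solve(K, y)
        mean = k_s.T @ alpha
        var = sf2 - np.sum(k_s * np.linalg.solve(K, k_s), axis=0)
        return mean, var

    X = np.linspace(0.0, 5.0, 20)
    y = np.sin(X)
    mu, v = gp_predict(X, y, np.array([2.5, 6.0]))
    print(mu, v)
    ```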

  7. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    Science.gov (United States)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities, and since it impacts tool wear, its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation step, such as contact pressure, sliding velocity distributions and contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.

  8. LLE experimental data, thermodynamic modeling and sensitivity analysis in the ethyl biodiesel from macauba pulp oil settling step.

    Science.gov (United States)

    Basso, Rodrigo Corrêa; da Silva, César Augusto Sodré; Sousa, Camila de Oliveira; Meirelles, Antonio José de Almeida; Batista, Eduardo Augusto Caldas

    2013-03-01

    The aim of this study was to obtain experimental data related to liquid–liquid equilibrium (LLE) of systems containing glycerol + ethanol + ethyl biodiesel from macauba pulp oil, perform thermodynamic modeling and simulate the settling step of this biodiesel using simulation software. Binary interaction parameters were adjusted for NRTL and UNIQUAC models. The UNIFAC-LLE and UNIFAC-Dortmund models were used to predict the LLE of the systems. A sensitivity analysis was applied to the settling step to describe the composition of the output streams as a function of ethanol in the feed stream. Ethanol had greater affinity for the glycerol-rich phase. The deviations between experimental data and calculated values were 0.44%, 1.07%, 3.52% and 2.82%, respectively, using the NRTL, UNIQUAC, UNIFAC-LLE and UNIFAC-Dortmund models. Excess ethanol in the feed stream causes losses of ethyl ester in the glycerol-rich stream and high concentration of glycerol in the ester-rich stream. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Development of one-step real-time reverse transcriptase-PCR-based assays for the rapid and simultaneous detection of four viruses causing porcine diarrhea

    OpenAIRE

    Masuda, Tsuneyuki; Tsuchiaka, Shinobu; Ashiba, Tomoko; Yamasato, Hiroshi; Fukunari, Kazuhiro; Omatsu, Tsutomu; Furuya, Tetsuya; Shirai, Junsuke; Mizutani, Tetsuya; Nagai, Makoto

    2016-01-01

    Porcine diarrhea caused by viruses is a major problem of the pig farming industry and can result in substantial losses of revenue. Thus, diagnosing the infectious agents is important to prevent and control diseases in pigs. We developed novel one-step real-time quantitative RT-PCR (qPCR) assays that can detect four porcine diarrheal viruses simultaneously: porcine epidemic diarrhea virus (PEDV), transmissible gastroenteritis virus (TGEV), porcine deltacoronavirus (PDCoV), and porcine group A ...

  10. Time will tell: community acceptability of HIV vaccine research before and after the “Step Study” vaccine discontinuation

    Directory of Open Access Journals (Sweden)

    Paula M Frew

    2010-09-01

    Full Text Available Objective: This study examines whether men who have sex with men (MSM) and transgender (TG) persons' attitudes, beliefs, and risk perceptions toward human immunodeficiency virus (HIV) vaccine research have been altered as a result of the negative findings from a phase 2B HIV vaccine study. Design: We conducted a cross-sectional survey among MSM and TG persons (N = 176) recruited from community settings in Atlanta from 2007 to 2008. The first group was recruited during an active phase 2B HIV vaccine trial in which a candidate vaccine was being evaluated (the "Step Study"), and the second group was recruited after product futility was widely reported in the media. Methods: Descriptive statistics, t tests, and chi-square tests were conducted to ascertain differences between the groups, and ordinal logistic regressions examined the influences of the above-mentioned factors on a critical outcome, future HIV vaccine study participation. The ordinal regression outcomes evaluated the influences on disinclination, neutrality, and inclination to study participation. Results: Behavioral outcomes such as future recruitment, event attendance, study promotion, and community mobilization did not reveal any differences in participants' intentions between the groups. However, we observed

  11. Modeling Hand-Over-Hand and Inchworm Steps in Myosin VI

    Science.gov (United States)

    Jack, Amanda; Lowe, Ian; Tehver, Riina

    Myosin VI is a molecular motor protein that moves along actin filaments to transport cargo within a cell. There is much experimental evidence that the myosin VI dimer moves "hand-over-hand" along actin; however, recent experiments suggest that the protein can also move via an "inchworm" mechanism. We created a mechanochemical kinetic model to predict myosin VI's behavior under different ATP, ADP, and force conditions, taking these alternative mechanisms into account. Our model's calculations agree well with experimental results and can also be used to predict myosin VI's behavior outside experimentally tested regimes, such as under forward force. We also predict an optimized motor function for the protein around physiological (-2 pN) load and anchoring under -3 pN load. By using our model to predict myosin VI's response to environmental change, we can gain insight into the behavior of a protein that can be difficult to observe experimentally.
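
    Mechanochemical kinetic models of this kind typically couple nucleotide-dependent rates with a load dependence of Boltzmann (Bell) form. A generic sketch of such a force-dependent stepping rate, with illustrative constants that are not the authors' fitted parameters:

    ```python
    import math

    KBT = 4.1  # thermal energy at room temperature [pN nm]

    def step_rate(k0, force_pn, d_nm=2.0):
        """Bell-type load dependence: stepping slows under opposing load
        (k0 and the distance parameter d are illustrative, not fitted)."""
        return k0 * math.exp(-force_pn * d_nm / KBT)

    def mean_velocity(force_pn, step_nm=30.0, k0=10.0):
        """Mean velocity = step size x load-dependent stepping rate."""
        return step_nm * step_rate(k0, force_pn)

    for f in (-2.0, 0.0, 2.0):   # negative load assists motion
        print(f"load {f:+.0f} pN -> v = {mean_velocity(f):6.1f} nm/s")
    ```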

  12. The next step in coastal numerical models: spectral/hp element methods?

    DEFF Research Database (Denmark)

    Eskilsson, Claes; Engsig-Karup, Allan Peter; Sherwin, Spencer J.

    2005-01-01

    In this paper we outline the application of spectral/hp element methods for modelling nonlinear and dispersive waves. We present one- and two-dimensional test cases for the shallow water equations and Boussinesq-type equations, including highly dispersive Boussinesq-type equations.

  13. Ultracold atoms in quasi-one-dimensional traps: A step beyond the Lieb-Liniger model

    Science.gov (United States)

    Jachymski, Krzysztof; Meinert, Florian; Veksler, Hagar; Julienne, Paul S.; Fishman, Shmuel

    2017-05-01

    Ultracold atoms placed in a tight cigar-shaped trap are usually described in terms of the Lieb-Liniger model. We study the extensions of this model which arise when the van der Waals interaction between atoms is taken into account. We find that the corrections induced by the finite range of interactions can become especially important in the vicinity of narrow Feshbach resonances, and we suggest realistic schemes for their experimental detection. The interplay of confinement and interactions can lead to effective transparency, where the one-dimensional interactions are weak in a wide range of parameters.

  14. Success rate evaluation of clinical governance implementation in teaching hospitals in Kerman (Iran) based on nine steps of Karsh's model.

    Science.gov (United States)

    Vali, Leila; Mastaneh, Zahra; Mouseli, Ali; Kardanmoghadam, Vida; Kamali, Sodabeh

    2017-07-01

    One of the ways to improve the quality of services in the health system is through clinical governance. This method aims to create a framework in which clinical service providers are accountable for continuing improvement of quality and for maintaining standards of services. The objective was to evaluate the success rate of clinical governance implementation in Kerman teaching hospitals based on the 9 steps of Karsh's Model. This cross-sectional study was conducted in 2015 on 94 people including chief executive officers (CEOs), nursing managers, clinical governance managers and experts, head nurses and nurses. The required data were collected through a researcher-made questionnaire containing 38 questions with a three-point Likert scale (good, moderate, and weak). Karsh's Model consists of nine steps: top management commitment to change, accountability for change, creating a structured approach for change, training, pilot implementation, communication, feedback, simulation, and end-user participation. Data analysis using descriptive statistics and the Mann-Whitney-Wilcoxon test was done with SPSS software version 16. About 81.9% of respondents were female and 74.5% had a Bachelor of Nursing (BN) degree. In general, the status of clinical governance implementation in the studied hospitals based on the 9 steps of the model was 44% (moderate). Significant relationships were observed between accountability and both organizational position (p=0.0012) and field of study (p=0.000). There were also significant relationships between the structured approach and organizational position (p=0.007), between communication and demographic characteristics (p=0.000), and between end-user participation and organizational position (p=0.03). Clinical governance should be implemented with proper needs assessment and the participation of all stakeholders, to ensure its enforcement in practice and to enhance the quality of services.

  15. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    Science.gov (United States)

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  16. Three-Step Model of Dispersed Flow Heat Transfer (Post CHF ...

    African Journals Online (AJOL)

    Since the equation is analytically derived, its differentiation with respect to wall superheat will yield the Minimum Heat Flux point. The equation and model provide a very powerful base for analysis and prediction of post Critical Heat Flux heat transfer. The stable film boiling data for dispersed vertical flow of liquid nitrogen ...

  17. Excitable waves and direction-sensing in Dictyostelium discoideum: steps towards a chemotaxis model

    Science.gov (United States)

    Bhowmik, Arpan; Rappel, Wouter-Jan; Levine, Herbert

    2016-02-01

    In recent years, there have been significant advances in our understanding of the mechanisms underlying chemically directed motility by eukaryotic cells such as Dictyostelium. In particular, the local excitation and global inhibition (LEGI) model has proven capable of providing a framework for quantitatively explaining many experiments that present Dictyostelium cells with tailored chemical stimuli and monitor their subsequent polarization. In their natural setting, cells generate their own directional signals via the detection and secretion of cyclic adenosine monophosphate (cAMP). Here, we couple the LEGI approach to an excitable medium model of the cAMP wave-field that is propagated by the cells and investigate the possibility for this class of models to enable accurate chemotaxis to the cAMP waveforms expected in vivo. Our results indicate that the ultra-sensitive version of the model does an excellent job in providing natural wave rectification, thereby providing a compelling solution to the ‘back-of-the-wave paradox’ during cellular aggregation.

  18. Modeling heat and mass transfer in the heat treatment step of yerba maté processing

    Directory of Open Access Journals (Sweden)

    J. M. Peralta

    2007-03-01

    Full Text Available The aim of this research was to estimate the leaf and twig temperature and moisture content of yerba maté branches (Ilex paraguariensis Saint Hilaire) during heat treatment, carried out in a rotary kiln dryer. These variables had to be estimated (by modelling the heat and mass transfer) due to the difficulty of experimental measurement in the dryer. For modelling, the equipment was divided into two zones: the flame or heat treatment zone and the drying zone. The model fitted the experimental data well when water loss took place only in the leaves. In the first zone, leaf temperature increased until it reached 135°C and then slowly decreased to 88°C at the exit, even though the gas temperature in this zone fell from 460°C to 120°C. Twig temperature increased across the two zones from its inlet value (25°C) up to 75°C. A model error of about 3% was estimated based on theoretical and experimental data on leaf moisture content.

  19. A Step beyond Univision Evaluation: Using a Systems Model of Performance Improvement.

    Science.gov (United States)

    Sleezer, Catherine M.; Zhang, Jiping; Gradous, Deane B.; Maile, Craig

    1999-01-01

    Examines three views of performance improvement--scientific management, instructional design, and systems thinking--each providing a unique view of performance improvement and specific roles for evaluation. Provides an integrated definition of performance and a synthesis model that encompasses the three views. (AEF)

  20. A Theory of Interest Rate Stepping : Inflation Targeting in a Dynamic Menu Cost Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Schaling, E.; Verhagen, W.H.

    1999-01-01

    Abstract: A stylised fact of monetary policy making is that central banks do not immediately respond to new information but rather seem to prefer to wait until sufficient ‘evidence’ to warrant a change has accumulated. However, theoretical models of inflation targeting imply that an optimising

  1. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it helps develop reasonable plans for scheduling power generation. However, future data such as wind power output and power load cannot be predicted accurately, and the multiobjective scheduling model is complex and nonlinear, so obtaining an accurate solution to such a problem is very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to find feasible preliminary solutions and construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
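
    The two-step scheme described here (reduce the interval-valued model to a linear program for feasible preliminary solutions, then refine with simulated annealing) can be sketched on a toy two-unit dispatch problem. All data, costs and bounds below are placeholders, and scipy's dual_annealing stands in for the paper's simulated annealing:

    ```python
    import numpy as np
    from scipy.optimize import linprog, dual_annealing

    # Step 1: collapse interval data to midpoints and solve an LP for a
    # feasible preliminary schedule (toy 2-generator dispatch).
    wind_interval = (80.0, 120.0)            # forecast wind power [MW]
    load_interval = (400.0, 440.0)           # forecast load [MW]
    wind_mid = sum(wind_interval) / 2
    load_mid = sum(load_interval) / 2

    c = [30.0, 50.0]                         # linearized fuel costs of 2 units
    A_eq = [[1.0, 1.0]]
    b_eq = [load_mid - wind_mid]             # thermal units cover net load
    bounds = [(50, 300), (50, 300)]
    lp = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

    # Step 2: refine around the LP solution with simulated annealing on a
    # nonlinear cost (quadratic fuel curves stand in for the full model).
    def cost(p):
        penalty = 1e4 * abs(p.sum() - (load_mid - wind_mid))
        return 0.01 * p[0]**2 + 30 * p[0] + 0.02 * p[1]**2 + 50 * p[1] + penalty

    result = dual_annealing(cost, bounds=bounds, x0=lp.x, seed=1)
    print(lp.x, result.x)
    ```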

  2. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kun Ren

    2014-01-01

    Full Text Available Wind-hydrothermal power system dispatching has received intensive attention in recent years because it helps develop reasonable plans for scheduling power generation. However, future data such as wind power output and power load cannot be predicted accurately, and the multiobjective scheduling model is complex and nonlinear, so obtaining an accurate solution to such a problem is very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to find feasible preliminary solutions and construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  3. A step function model to evaluate the real monetary value of man-sievert with real GDP

    Energy Technology Data Exchange (ETDEWEB)

    Na, Seong H. [Korea Institute of Nuclear Safety, 19 Guseong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)], E-mail: shna@kins.re.kr; Kim, Sun G. [School of Business, Daejeon University, Yong Woon-dong, Dong-gu, Daejeon 300-716 (Korea, Republic of)], E-mail: sunkim@dju.ac.kr

    2009-07-15

    For use in a cost-benefit analysis to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the monetary value of the man-sievert in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practice. GDP deflators reflect the economic situation of society. Finally, we suggest that the Korean model can be generalized simply to other countries without normalizing any country-specific factors.
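
    In essence, the alpha value described here is a base value (risk coefficient x real GDP x life expectancy) scaled by an aversion factor that steps up across individual-dose bands. A hedged sketch with placeholder numbers; the paper's actual coefficients and band boundaries are not reproduced here:

    ```python
    # Illustrative step-function alpha value (monetary value of 1 man-Sv).
    # All numbers are placeholders, not the coefficients fitted in the paper.
    RISK_COEFF = 0.057          # nominal risk per Sv (ICRP-style, illustrative)
    GDP_PER_CAPUT = 30_000.0    # real GDP per caput [currency units/year]
    LIFE_EXPECTANCY = 20.0      # average years of life lost per fatal case

    AVERSION = [                # (upper bound of annual dose band [mSv], factor)
        (1.0, 1.0),
        (10.0, 2.0),
        (20.0, 5.0),
        (float("inf"), 10.0),
    ]

    def alpha_value(individual_dose_msv: float) -> float:
        """Monetary value of one man-sievert for a given individual dose band."""
        base = RISK_COEFF * GDP_PER_CAPUT * LIFE_EXPECTANCY
        for upper, factor in AVERSION:
            if individual_dose_msv <= upper:
                return base * factor

    print(alpha_value(0.5), alpha_value(15.0))
    ```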

  4. A step function model to evaluate the real monetary value of man-sievert with real GDP

    International Nuclear Information System (INIS)

    Na, Seong H.; Kim, Sun G.

    2009-01-01

    For use in a cost-benefit analysis to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the monetary value of the man-sievert in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practice. GDP deflators reflect the economic situation of society. Finally, we suggest that the Korean model can be generalized simply to other countries without normalizing any country-specific factors.

  5. First steps towards modeling of ion-driven turbulence in Wendelstein 7-X

    Science.gov (United States)

    Warmer, F.; Xanthopoulos, P.; Proll, J. H. E.; Beidler, C. D.; Turkin, Y.; Wolf, R. C.

    2018-01-01

    Due to the foreseen improvement of neoclassical confinement in optimised stellarators - like the newly commissioned Wendelstein 7-X (W7-X) experiment in Greifswald, Germany - it is expected that turbulence will contribute significantly to the heat and particle transport, thus posing a limit to the performance of such devices. In order to develop discharge scenarios, it is therefore necessary to develop a model which reliably captures the basic characteristics of turbulence and predicts the levels thereof. The outcome is not only affordable, requiring only a fraction of the computational cost normally needed for repeated direct turbulence simulations, but also highlights important physics. In this model, we seek to describe the ion heat flux caused by ion temperature gradient (ITG) micro-turbulence, which, in certain heating scenarios, can be a strong source of free energy. With the aid of a relatively small number of state-of-the-art nonlinear gyrokinetic simulations, an initial critical gradient model (CGM) is devised, with the aim of replacing an empirical model stemming from observations in prior stellarator experiments. The novel CGM, in its present form, encapsulates all available knowledge about ion-driven 3D turbulence to date, and allows for further important extensions towards an accurate interpretation and prediction of the 'anomalous' transport. The CGM depends on the stiffness of the ITG turbulence scaling in W7-X, and implicitly includes the nonlinear zonal flow response. It is shown that the CGM is suitable for 1D framework turbulence modeling.
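
    Critical gradient models of this family typically express the normalized ion heat flux as zero below a threshold gradient and growing linearly above it with a stiffness slope. A schematic sketch; the threshold and stiffness values are illustrative, not the W7-X fit:

    ```python
    def cgm_heat_flux(r_over_lt, crit=1.5, stiffness=0.8):
        """Schematic critical gradient model: normalized ion heat flux
        vanishes below the ITG threshold R/L_T,crit and grows linearly
        above it with a device-specific stiffness (values illustrative)."""
        return stiffness * max(0.0, r_over_lt - crit)

    for g in (1.0, 2.0, 4.0):
        print(f"R/L_T = {g}: Q = {cgm_heat_flux(g):.2f}")
    ```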

  6. On the Role of Chemical Kinetics Modeling in the LES of Premixed Bluff Body and Backward-Facing Step Combustors

    KAUST Repository

    Chakroun, Nadim W.

    2017-01-05

    Recirculating flows in the wake of a bluff body, behind a sudden expansion or downstream of a swirler are pivotal for anchoring a flame and expanding the stability range. The size and structure of these recirculation zones, and the accurate prediction of their length, are important characteristics that computational simulations should capture. Large eddy simulation (LES) techniques with an appropriate combustion model and reaction mechanism afford a balance between computational complexity and predictive accuracy. In this study, propane/air mixtures were simulated in a bluff-body stabilized combustor based on the Volvo test case and also in a backward-facing step combustor. The main goal is to investigate the role of the chemical mechanism, and the accuracy of its estimate of the extinction strain rate, in the prediction of important flow features such as recirculation zones. Two 2-step mechanisms were employed: one that gave reasonable extinction strain rates and a modified 2-step mechanism that grossly over-predicted them. The modified mechanism under-predicted recirculation zone lengths compared to the original mechanism and agreed less well with experiments in both geometries. While the recirculation zone lengths predicted by both reduced mechanisms in the step combustor scale linearly with the extinction strain rate, the scaling curves do not match experimental results, as none of the simplified mechanisms produce extinction strain rates consistent with those predicted by comprehensive mechanisms. We conclude that it is very important that a chemical mechanism correctly predicts extinction strain rates if it is to be used in CFD simulations.

  7. The electric field in capacitively coupled RF discharges: a smooth step model that includes thermal and dynamic effects

    Science.gov (United States)

    Brinkmann, Ralf Peter

    2015-12-01

    The electric field in radio-frequency driven capacitively coupled plasmas (RF-CCP) is studied, taking thermal (finite electron temperature) and dynamic (finite electron mass) effects into account. Two dimensionless numbers are introduced: the ratio ε = λ_D/l of the electron Debye length λ_D to the minimum plasma gradient length l (typically the sheath thickness), and the ratio η = ω_RF/ω_pe of the RF frequency ω_RF to the electron plasma frequency ω_pe. Assuming both numbers small but finite, an asymptotic expansion of an electron fluid model is carried out up to quadratic order inclusively. An expression for the electric field is obtained which yields (i) the space charge field in the sheath, (ii) the generalized Ohmic and ambipolar field in the plasma, and (iii) a smooth interpolation for the transition in between. The new expression is a direct generalization of the Advanced Algebraic Approximation (AAA) proposed by the same author (2009 J. Phys. D: Appl. Phys. 42 194009), which is recovered for η → 0, and of the established Step Model (SM) by Godyak (1976 Sov. J. Plasma Phys. 2 78), which corresponds to the simultaneous limits η → 0, ε → 0. A comparison of the hereby proposed Smooth Step Model (SSM) with a numerical solution of the full dynamic problem proves very satisfactory.
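
    The two dimensionless parameters can be evaluated directly from textbook plasma formulas. A sketch with illustrative CCP-like numbers (the density, temperature and sheath scale are assumptions, not the paper's cases):

    ```python
    import math

    E0 = 8.854e-12      # vacuum permittivity [F/m]
    QE = 1.602e-19      # elementary charge [C]
    ME = 9.109e-31      # electron mass [kg]

    def debye_length(n_e, t_e_ev):
        """Electron Debye length lambda_D [m]."""
        return math.sqrt(E0 * t_e_ev * QE / (n_e * QE**2))

    def plasma_frequency(n_e):
        """Electron plasma frequency omega_pe [rad/s]."""
        return math.sqrt(n_e * QE**2 / (E0 * ME))

    n_e, t_e, sheath = 1e16, 3.0, 1e-3          # m^-3, eV, m (illustrative)
    omega_rf = 2 * math.pi * 13.56e6            # standard 13.56 MHz drive
    eps = debye_length(n_e, t_e) / sheath       # epsilon = lambda_D / l
    eta = omega_rf / plasma_frequency(n_e)      # eta = omega_RF / omega_pe
    print(f"epsilon = {eps:.3f}, eta = {eta:.4f}")
    ```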

  8. Time series modelling of overflow structures

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.

    1997-01-01

    The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities … to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published...

  9. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species

    Directory of Open Access Journals (Sweden)

    P Das

    2017-01-01

    Full Text Available Purpose: Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, (b) optimization of PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Materials and Methods: Three nucleic acid extraction protocols were compared and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Results: Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, with no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. Conclusion: The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  10. Standardization of a two-step real-time polymerase chain reaction based method for species-specific detection of medically important Aspergillus species.

    Science.gov (United States)

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A

    2017-01-01

    Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, (b) optimization of PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second, species-specific PCR step the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus and Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus species, other fungal species and human DNA, with no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for the diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.

  11. Magnetic-time model for seed germination | Mahajan | African ...

    African Journals Online (AJOL)

    On the basis of this, a new germination model, called the magnetic time model, was developed; it was incorporated into the hydrothermal model and hence named the hydrothermal magnetic time model, which is proposed to incorporate the effect of magnetic fields of different intensities on plants. The magnetic time constant ΘB is ...

  12. Continuous time structural equation modeling with R package ctsem

    NARCIS (Netherlands)

    Driver, C.C.; Oud, J.H.L.; Völkle, M.C.

    2017-01-01

    We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An

  13. Time series sightability modeling of animal populations.

    Directory of Open Access Journals (Sweden)

    Althea A ArchMiller

    Full Text Available Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
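
    The modified Horvitz-Thompson adjustment mentioned above divides each observed count by a model-estimated detection probability. A compact sketch with synthetic data: a logistic sightability model is fitted to detection/non-detection records from marked animals, then applied to a detection-only survey (the covariate choice and all numbers are illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Detection/non-detection data from radio-marked moose (synthetic):
    # covariate = visual obstruction (%), outcome = detected or not.
    voc = rng.uniform(0, 100, 300)
    p_true = 1 / (1 + np.exp(-(2.0 - 0.05 * voc)))
    detected = rng.binomial(1, p_true)
    sight_model = LogisticRegression().fit(voc.reshape(-1, 1), detected)

    # Operational (detection-only) survey: observed groups with covariates.
    group_sizes = np.array([2, 1, 3, 1, 4])
    group_voc = np.array([10.0, 55.0, 30.0, 80.0, 20.0])
    p_hat = sight_model.predict_proba(group_voc.reshape(-1, 1))[:, 1]

    # Modified Horvitz-Thompson estimate of abundance in the surveyed area.
    n_hat = np.sum(group_sizes / p_hat)
    print(f"mHT abundance estimate: {n_hat:.1f}")
    ```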

  14. Using a two-step matrix solution to reduce the run time in KULL's magnetic diffusion package

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, T A; Kolev, T V

    2010-12-17

    Recently a Resistive Magnetohydrodynamics (MHD) package has been added to the KULL code. In order to be compatible with the underlying hydrodynamics algorithm, a new sub-zonal magnetics discretization was developed that supports arbitrary polygonal and polyhedral zones. This flexibility comes at the cost of many more unknowns per zone - approximately ten times more for a hexahedral mesh. We can eliminate some (or all, depending on the dimensionality) of the extra unknowns from the global matrix during assembly by using a Schur complement approach. This trades expensive global work for cache-friendly local work, while still allowing solution for the full system. Significant improvements in the solution time are observed for several test problems.
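
    The Schur complement elimination described above can be illustrated on a small partitioned system: the extra interior unknowns x_I are condensed out with local work, a smaller global system is solved for the boundary unknowns x_B, and the interior values are recovered by back-substitution. A minimal dense sketch (the KULL assembly itself is sparse and zone-local):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Partitioned SPD system: [A_II A_IB; A_BI A_BB] [x_I; x_B] = [f_I; f_B]
    n_i, n_b = 8, 4                       # interior vs boundary unknowns
    M = rng.standard_normal((n_i + n_b, n_i + n_b))
    A = M @ M.T + (n_i + n_b) * np.eye(n_i + n_b)   # make SPD
    f = rng.standard_normal(n_i + n_b)
    A_II, A_IB = A[:n_i, :n_i], A[:n_i, n_i:]
    A_BI, A_BB = A[n_i:, :n_i], A[n_i:, n_i:]
    f_I, f_B = f[:n_i], f[n_i:]

    # Local (cache-friendly) work: eliminate interior unknowns per zone.
    A_II_inv_AIB = np.linalg.solve(A_II, A_IB)
    A_II_inv_fI = np.linalg.solve(A_II, f_I)

    # Global work only on the smaller Schur complement S = A_BB - A_BI A_II^-1 A_IB.
    S = A_BB - A_BI @ A_II_inv_AIB
    x_B = np.linalg.solve(S, f_B - A_BI @ A_II_inv_fI)
    x_I = A_II_inv_fI - A_II_inv_AIB @ x_B   # back-substitute interior unknowns

    assert np.allclose(A @ np.concatenate([x_I, x_B]), f)
    ```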

  15. Using a two-step matrix solution to reduce run time in KULL's magnetic diffusion package

    International Nuclear Information System (INIS)

    Brunner, Thomas A.; Kolev, Tzanio V.

    2011-01-01

    Recently a Resistive Magnetohydrodynamics (MHD) package has been added to the KULL code. In order to be compatible with the underlying hydrodynamics algorithm, a new sub-zonal magnetics discretization was developed that supports arbitrary polygonal and polyhedral zones. This flexibility comes at the cost of many more unknowns per zone - approximately ten times more for a hexahedral mesh. We can eliminate some (or all, depending on the dimensionality) of the extra unknowns from the global matrix during assembly by using a Schur complement approach. This trades expensive global work for cache-friendly local work, while still allowing solution for the full system. Significant improvements in the solution time are observed for several test problems. (author)

  16. A one-step real-time multiplex PCR for screening Y-chromosomal microdeletions without downstream amplicon size analysis.

    Directory of Open Access Journals (Sweden)

    Viviana Kozina

    Full Text Available BACKGROUND: Y-chromosomal microdeletions (YCMD are one of the major genetic causes for non-obstructive azoospermia. Genetic testing for YCMD by multiplex polymerase chain reaction (PCR is an established method for quick and robust screening of deletions in the AZF regions of the Y-chromosome. Multiplex PCRs have the advantage of including a control gene in every reaction and significantly reducing the number of reactions needed to screen the relevant genomic markers. PRINCIPAL FINDINGS: The widely established "EAA/EMQN best practice guidelines for molecular diagnosis of Y-chromosomal microdeletions (2004" were used as a basis for designing a real-time multiplex PCR system, in which the YCMD can simply be identified by their melting points. For this reason, some AZF primers were substituted by primers for regions in their genomic proximity, and the ZFX/ZFY control primer was exchanged by the AMELX/AMELY control primer. Furthermore, we substituted the classical SybrGreen I dye by the novel and high-performing DNA-binding dye EvaGreen™ and put substantial effort in titrating the primer combinations in respect to optimal melting peak separation and peak size. SIGNIFICANCE: With these changes, we were able to develop a platform-independent and robust real-time based multiplex PCR, which makes the need for amplicon identification by electrophoretic sizing expendable. By using an open-source system for real-time PCR analysis, we further demonstrate the applicability of automated melting point and YCMD detection.

  17. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  18. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
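
    The clustering step can be sketched with standard hierarchical tools: models are compared by the fraction of inter-residue contacts they share, the complement serves as a distance, and the ten most populated clusters each contribute their top-scored member. A schematic with synthetic contact sets and scores, not the CONSRANK implementation:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Each docking model is reduced to its set of inter-residue contacts
    # (synthetic stand-ins; real contacts come from model coordinates).
    rng = np.random.default_rng(2)
    models = [frozenset(map(tuple, rng.integers(0, 40, (30, 2))))
              for _ in range(60)]
    scores = rng.random(60)                  # per-model consensus scores

    def contact_distance(a, b):
        """1 - fraction of shared contacts (Jaccard-style dissimilarity)."""
        return 1.0 - len(a & b) / len(a | b)

    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = contact_distance(models[i], models[j])

    labels = fcluster(linkage(squareform(D), method="average"),
                      t=0.9, criterion="distance")

    # Keep the ten most populated clusters; pick the top-scored model of each.
    cluster_ids, sizes = np.unique(labels, return_counts=True)
    top_clusters = cluster_ids[np.argsort(sizes)[::-1][:10]]
    selected = [max(np.flatnonzero(labels == c), key=lambda i: scores[i])
                for c in top_clusters]
    print(selected)
    ```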

  19. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  20. Long-time asymptotics for polymerization models

    OpenAIRE

    Calvo, Juan; Doumic, Marie; Perthame, Benoît

    2017-01-01

    This study is devoted to the long-term behavior of nucleation, growth and fragmentation equations, modeling the spontaneous formation and kinetics of large polymers in a spatially homogeneous and closed environment. Such models are, for instance, commonly used in the biophysical community in order to model in vitro experiments of fibrillation. We investigate the interplay between four processes: nucleation, polymerization, depolymerization and fragmentation. We first revisit the well-known L...

  1. Time series sightability modeling of animal populations

    Science.gov (United States)

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.

  2. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010 - one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom

  3. Modelling farm vulnerability to flooding: A step toward vulnerability mitigation policies appraisal

    Science.gov (United States)

    Brémond, P.; Abrami, G.; Blanc, C.; Grelot, F.

    2009-04-01

    flood. In the case of farm activities, vulnerability mitigation consists in implementing measures which can be physical (elevating equipment or the electric power system), organizational (emergency or recovery plans) or financial (insurance). These measures aim at decreasing the total damage incurred by farmers in case of flooding. For instance, if equipment is elevated, it will not suffer direct damage such as degradation; as a consequence, it remains available for continuing production or for recovery tasks, thus avoiding indirect damage such as delays, indebtedness… The effects of these policies on farms, in particular vulnerability mitigation, cannot be appraised using current methodologies, mainly because they do not consider the farm as a whole and focus on direct damage at the land plot scale (loss of yield). Moreover, since vulnerability mitigation policies are quite recent, few examples of implementation exist and no feedback experience can be processed. Meanwhile, decision makers and financial actors require more justification of the efficiency of public funds through economic appraisal of the projects. On the Rhône River, decision makers asked for an economic evaluation of the farm vulnerability mitigation program they plan to implement. This implies identifying the effects of the measures to mitigate farm vulnerability, and classifying them by comparing their efficacy (avoided damage) and their cost of implementation. In this presentation, we propose and discuss a conceptual model of vulnerability at the farm scale. The modelling, in Unified Modelling Language, enabled us to represent the ties between the spatial, organizational and temporal dimensions, which are central to understanding farm vulnerability and resilience to flooding. Through this modelling, we pursue three goals: to improve the comprehension of farm vulnerability and create a framework that allows discussion with experts of different disciplines as well as with local farmers; to identify data which

  4. A Multiple Time-Step Finite State Projection Algorithm for the Solution to the Chemical Master Equation

    Science.gov (United States)

    2006-11-30

    ...begins in state k, the initial probability distribution for the CME is written p(0) = δ_ik, where δ_ik is the Kronecker delta. Suppose now that the initial distribution is given not by the Kronecker delta but by a vector with many non-zero elements. For example, suppose that the initial distribution is ...
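
    For context, the finite state projection approach truncates the CME's infinite state space to a finite set and propagates the truncated generator; the probability mass retained in the set bounds the truncation error. A minimal one-interval sketch for a birth-death process (rates are illustrative; the report's algorithm re-projects over multiple time steps):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Truncated CME generator for a birth-death process on states {0, ..., N}.
    N, birth, death = 50, 2.0, 0.1
    A = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        A[x, x] -= birth                  # birth propensity always fires...
        if x < N:
            A[x + 1, x] += birth          # ...but inflow is kept only in the set
        if x > 0:
            A[x - 1, x] += death * x
            A[x, x] -= death * x

    p0 = np.zeros(N + 1)
    p0[0] = 1.0                           # Kronecker-delta initial distribution
    p_t = expm(A * 5.0) @ p0              # propagate the projection to t = 5

    # FSP guarantee: probability leaked from the truncated set bounds the error.
    print(f"retained mass {p_t.sum():.6f}, truncation error <= {1 - p_t.sum():.2e}")
    ```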

  5. A One-Step Real-Time Multiplex PCR for Screening Y-Chromosomal Microdeletions without Downstream Amplicon Size Analysis

    Science.gov (United States)

    Kozina, Viviana; Cappallo-Obermann, Heike; Gromoll, Jörg; Spiess, Andrej-Nikolai

    2011-01-01

    Background Y-chromosomal microdeletions (YCMD) are one of the major genetic causes of non-obstructive azoospermia. Genetic testing for YCMD by multiplex polymerase chain reaction (PCR) is an established method for quick and robust screening of deletions in the AZF regions of the Y-chromosome. Multiplex PCRs have the advantage of including a control gene in every reaction and significantly reducing the number of reactions needed to screen the relevant genomic markers. Principal Findings The widely established “EAA/EMQN best practice guidelines for molecular diagnosis of Y-chromosomal microdeletions (2004)” were used as a basis for designing a real-time multiplex PCR system, in which the YCMD can simply be identified by their melting points. For this reason, some AZF primers were substituted by primers for regions in their genomic proximity, and the ZFX/ZFY control primer was exchanged for the AMELX/AMELY control primer. Furthermore, we substituted the classical SybrGreen I dye with the novel and high-performing DNA-binding dye EvaGreen™ and put substantial effort into titrating the primer combinations with respect to optimal melting peak separation and peak size. Significance With these changes, we were able to develop a platform-independent and robust real-time based multiplex PCR, which makes amplicon identification by electrophoretic sizing expendable. By using an open-source system for real-time PCR analysis, we further demonstrate the applicability of automated melting point and YCMD detection. PMID:21887237

  6. Two-step Laser Time-of-Flight Mass Spectrometry to Elucidate Organic Diversity in Planetary Surface Materials.

    Science.gov (United States)

    Getty, Stephanie A.; Brinckerhoff, William B.; Cornish, Timothy; Li, Xiang; Floyd, Melissa; Arevalo, Ricardo Jr.; Cook, Jamie Elsila; Callahan, Michael P.

    2013-01-01

    Laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) holds promise to be a low-mass, compact in situ analytical capability for future landed missions to planetary surfaces. The ability to analyze a solid sample for both mineralogical and preserved organic content with laser ionization could be compelling as part of a scientific mission payload that must be prepared for unanticipated discoveries. Targeted missions for this instrument capability include Mars, Europa, Enceladus, and small icy bodies, such as asteroids and comets.

  7. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subject resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of support received from teachers on school adjustment and of support received from the family on psychological wellbeing; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research

  9. Peptide-nucleotide microdroplets as a step towards a membrane-free protocell model

    Science.gov (United States)

    Koga, Shogo; Williams, David S.; Perriman, Adam W.; Mann, Stephen

    2011-09-01

    Although phospholipid bilayers are ubiquitous in modern cells, their impermeability, lack of dynamic properties, and synthetic complexity are difficult to reconcile with plausible pathways of proto-metabolism, growth and division. Here, we present an alternative membrane-free model, which demonstrates that low-molecular-weight mononucleotides and simple cationic peptides spontaneously accumulate in water into microdroplets that are stable to changes in temperature and salt concentration, undergo pH-induced cycles of growth and decay, and promote α-helical peptide secondary structure. Moreover, the microdroplets selectively sequester porphyrins, inorganic nanoparticles and enzymes to generate supramolecular stacked arrays of light-harvesting molecules, nanoparticle-mediated oxidase activity, and enhanced rates of glucose phosphorylation, respectively. Taken together, our results suggest that peptide-nucleotide microdroplets can be considered as a new type of protocell model that could be used to develop novel bioreactors, primitive artificial cells and plausible pathways to prebiotic organization before the emergence of lipid-based compartmentalization on the early Earth.

  10. Individual-based modelling of population growth and diffusion in discrete time.

    Directory of Open Access Journals (Sweden)

    Natalie Tkachenko

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predicts that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss how well model-based estimates of first-arrival dates agree with archaeological dates as a function of the IBM parameter settings.
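
    As an illustration of the mechanics described above, here is a minimal one-dimensional sketch of a single discrete-time step (not the authors' code): per-site binomial draws for birth, death, and movement, with logistic regulation through a density-dependent death probability. The parameter names b, d, m and K are ours, and periodic boundaries are used purely for brevity.

        import numpy as np

        rng = np.random.default_rng(0)

        def ibm_step(N, b=0.02, d=0.01, m=0.05, K=50):
            # N: integer array of individuals per site; one time step.
            births = rng.binomial(N, b)
            # Logistic regulation: death probability rises with local density.
            deaths = rng.binomial(N, np.minimum(1.0, d + (b - d) * N / K))
            survivors = N + births - deaths
            movers = rng.binomial(survivors, m)
            left = rng.binomial(movers, 0.5)     # movers split left/right
            right = movers - left
            # Periodic boundaries keep the sketch short.
            return survivors - movers + np.roll(right, 1) + np.roll(left, -1)

        # Seeding 10 individuals at the centre and iterating produces a
        # travelling front, the discrete analogue of Fisher-Kolmogorov.
        N = np.zeros(200, dtype=int); N[100] = 10
        for _ in range(500):
            N = ibm_step(N)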

  11. Radiation environmental real-time monitoring and dispersion modeling

    International Nuclear Information System (INIS)

    Kovacik, A.; Bartokova, I.; Omelka, J.; Melicherova, T.

    2014-01-01

    The system of real-time radiation monitoring provided by MicroStep-MIS is a turn-key solution for measurement, acquisition, processing, reporting, archiving and displaying of various radiation data. At the level of measurements, the monitoring stations can be equipped with various devices, from radiation probes measuring the actual ambient gamma dose rate to fully automated aerosol monitors returning analysis results of natural and man-made radionuclide concentrations in the air. Using data gathered by our radiation probes RPSG-05, integrated into the monitoring network of the Crisis Management of the Slovak Republic and into the monitoring network of the Slovak Hydrometeorological Institute, we demonstrate their reliability and long-term stability of measurements. Data from RPSG-05 probes and GammaTracer probes, both of which are used in the SHI network, are compared. The sensitivity of RPSG-05 is documented with data where changes of dose rate are caused by precipitation. The qualities of the RPSG-05 probe are also illustrated by the example of its use in the radiation monitoring network in the United Arab Emirates. More detailed information about the radioactivity of the atmosphere can be obtained by using spectrometric detectors (e.g. scintillation detectors) which, besides gamma dose rate values, also offer the possibility to identify different radionuclides. However, this possibility is limited by technical parameters of the detector, such as energy resolution and detection efficiency in the given measurement geometry. Clearer and less ambiguous information can be obtained from aerosol monitors with a built-in silicon detector for alpha and beta particles and an electrically cooled HPGe detector dedicated to gamma-ray spectrometry, which is performed during sampling. Data from a complex radiation monitoring network can be used, together with meteorological data, in the radiation dispersion model by MicroStep-MIS. This model serves for simulation of the atmospheric propagation of radionuclides.

  12. Model compilation for embedded real-time planning and diagnosis

    Science.gov (United States)

    Barrett, Anthony

    2004-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model into an internal structure. Not only does this structure facilitate computing the most likely current device mode from n sets of sensor measurements, but it also facilitates generating an n-step reconfiguration plan that is most likely to reach a target mode, if such a plan exists.

  13. Hierarchical Bayes Models for Response Time Data

    Science.gov (United States)

    Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha

    2010-01-01

    Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…

  14. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    Science.gov (United States)

    Octova, A.; Sule, R.

    2018-04-01

    Travel time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers are placed in the others. First-arrival travel time data recorded by each receiver are used as the input data to the seismic tomography method. This research is divided into three steps. The first step is reconstructing the synthetic model based on field parameters, with configurations of 24 receivers and 45 receivers. The second step is applying the inversion process to the field data, which consist of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomography processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In resolution tests using a checkerboard, anomalies can still be identified down to a size of 2 m x 2 m. Travel time cross-hole seismic tomography analysis proves this method is very well suited to describing subsurface structure and layer boundaries. The size and position of anomalies can be recognized and interpreted easily.

  15. Solvable continuous-time random walk model of the motion of tracer particles through porous media

    Science.gov (United States)

    Fouxon, Itzhak; Holzner, Markus

    2016-08-01

    We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015), 10.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
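
    A hedged simulation sketch of the walk just described, with illustrative distribution choices: channel lengths are drawn exponentially, speeds from a density proportional to v^α near zero (matching the small-velocity tail parameter α above), and the velocity persists between steps with probability p. None of these specific choices is claimed to be the paper's.

        import numpy as np

        rng = np.random.default_rng(1)

        def ctrw_trajectory(n_steps=10_000, p=0.5, alpha=0.5, l_mean=1.0):
            # One step = passage through one channel: random length l,
            # duration tau = l / v; v is kept with probability p.
            t = x = 0.0
            v = rng.power(alpha + 1)          # density ~ v**alpha on (0, 1]
            times, xs = [0.0], [0.0]
            for _ in range(n_steps):
                l = rng.exponential(l_mean)
                if rng.random() > p:          # velocity renewed with prob. 1 - p
                    v = rng.power(alpha + 1)
                t += l / v
                x += l                        # distance along the streamline
                times.append(t)
                xs.append(x)
            return np.array(times), np.array(xs)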

  16. Development of one-step SYBR Green real-time RT-PCR for quantifying bovine viral diarrhea virus type-1 and its comparison with conventional RT-PCR

    Directory of Open Access Journals (Sweden)

    Lou Sai

    2011-07-01

    Background Bovine viral diarrhea virus (BVDV) is a worldwide pathogen in cattle and acts as a surrogate model for hepatitis C virus (HCV). A one-step real-time fluorogenic quantitative reverse transcription polymerase chain reaction (RT-PCR) assay based on SYBR Green I dye has not previously been established for BVDV detection. This study aims to develop a quantitative one-step RT-PCR assay to detect BVDV type-1 in cell culture. Results One-step quantitative SYBR Green I RT-PCR was developed by amplifying cDNA template from viral RNA and using in vitro transcribed BVDV RNA to establish a standard curve. The assay had a detection limit as low as 100 copies/ml of BVDV RNA, a reaction efficiency of 103.2%, a correlation coefficient (R2) of 0.995, and a maximum intra-assay CV of 2.63%. It was 10-fold more sensitive than conventional RT-PCR and can quantitatively detect BVDV RNA levels from 10-fold serial dilutions of titrated viruses containing a titer from 10^-1 to 10^-5 TCID50, without non-specific amplification. Melting curve analysis showed no primer-dimers and no non-specific products. Conclusions The one-step SYBR Green I RT-PCR is specific, sensitive and reproducible for the quantification of BVDV in cell culture. This one-step SYBR Green I RT-PCR strategy may be further optimized as a reliable assay for diagnosing and monitoring BVDV infection in animals. It may also be applied to evaluate candidate agents against HCV using the BVDV cell culture model.
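
    The reaction-efficiency and quantification figures above come from standard curve arithmetic: efficiency E = 10^(-1/slope) - 1 for the fitted line Ct = slope * log10(copies) + intercept. A short sketch with made-up dilution data (the numbers are illustrative, not the study's):

        import numpy as np

        log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # 10-fold dilutions
        ct = np.array([30.1, 26.9, 23.6, 20.4, 17.2])        # illustrative Ct values
        slope, intercept = np.polyfit(log10_copies, ct, 1)

        efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 means 100% (doubling per cycle)
        r_squared = np.corrcoef(log10_copies, ct)[0, 1] ** 2

        def copies_from_ct(ct_obs):
            # Invert the standard curve to quantify an unknown sample.
            return 10 ** ((ct_obs - intercept) / slope)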

  17. Optimization-based control of constrained nonlinear systems with continuous-time models: Adaptive time-grid refinement algorithms

    Science.gov (United States)

    Fontes, Fernando A. C. C.; Paiva, Luís T.

    2016-10-01

    We address optimal control problems for nonlinear systems with pathwise state-constraints. These are challenging nonlinear problems for which the number of discretization points is a major factor determining the computational time. Also, the location of these points has a major impact on the accuracy of the solutions. We propose an algorithm that iteratively finds an adequate time-grid to satisfy some predefined error estimate on the obtained trajectories, which is guided by information on the adjoint multipliers. The obtained results show a highly favorable comparison against the traditional equidistant-spaced time-grid methods, including the ones using discrete-time models. This way, continuous-time plant models can be directly used. The discretization procedure can be automated and there is no need to select a priori the adequate time step. Even if the optimization procedure is forced to stop in an early stage, as might be the case in real-time problems, we can still obtain a meaningful solution, although it might be a less accurate one. The extension of the procedure to a Model Predictive Control (MPC) context is proposed here. By defining a time-dependent accuracy threshold, we can generate solutions that are more accurate in the initial parts of the receding horizon, which are the most relevant for MPC.

  18. Phd study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. This study had a multiple strategy design with a philosophical base of critical realism … and pragmatism. The fixed design for this study was a between- and within-groups design testing the APC's reliability and validity. The two different groups were parents with neglected children and parents with non-neglected children. The flexible design had a multiple case study strategy specifically … with interplay of turns between parent and child as the case under study, comparing clinical and non-clinical groups and looking for differences in patterns of interaction. The flexible design informed the fixed design and led to further valuable statistical analysis. The presenter will provide an overview…

  19. OTA-Grapes: A Mechanistic Model to Predict Ochratoxin A Risk in Grapes, a Step beyond the Systems Approach

    Directory of Open Access Journals (Sweden)

    Battilani Paola

    2015-08-01

    Ochratoxin A (OTA) is a fungal metabolite dangerous for human and animal health due to its nephrotoxic, immunotoxic, mutagenic, teratogenic and carcinogenic effects, classified by the International Agency for Research on Cancer in group 2B, possible human carcinogen. This toxin has been reported as a wine contaminant since 1996. The aim of this study was to develop a conceptual model for the dynamic simulation of the A. carbonarius life cycle in grapes along the growing season, including OTA production in berries. Functions describing the role of weather parameters in each step of the infection cycle were developed and organized in a prototype model called OTA-grapes. In modelling the influence of temperature on OTA production, it emerged that fungal strains can be separated into two different clusters based on the dynamics of OTA production and their optimal temperatures. Therefore, two functions were developed, and based on statistical data analysis, it was assumed that the two types of strains contribute equally to the population. Model validation was not possible because of scarce OTA contamination data, but relevant differences in OTA-I, the output index of the model, were noticed between low- and high-risk areas. To our knowledge, this is the first attempt to assess and model A. carbonarius in order to predict the risk of OTA contamination in grapes.

  20. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    Science.gov (United States)

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting
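
    A hedged sketch of the flavor of such a pipeline, using scikit-learn stand-ins rather than the study's actual tooling: a missingness filter, complete-case versus mean-imputed derivation sets, an information-gain-style feature ranking via mutual information, and a simple classifier scored by cross-validated AUC. All names and choices here are ours.

        import numpy as np
        import pandas as pd
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.impute import SimpleImputer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def compare_derivations(df: pd.DataFrame, target: str = "readmitted"):
            # Assumes numeric feature columns and a binary target column.
            df = df.loc[:, df.isna().mean() <= 0.5]    # drop variables >50% absent
            X, y = df.drop(columns=[target]), df[target]

            cc = X.dropna()                            # complete-case derivation set
            mi = pd.DataFrame(SimpleImputer().fit_transform(X), columns=X.columns)
            for name, Xd, yd in [("complete_case", cc, y.loc[cc.index]),
                                 ("mean_imputed", mi, y.reset_index(drop=True))]:
                auc = cross_val_score(LogisticRegression(max_iter=1000),
                                      Xd, yd, scoring="roc_auc").mean()
                rank = mutual_info_classif(Xd, yd)     # information-gain-style ranking
                top = list(Xd.columns[np.argsort(rank)[::-1][:5]])
                print(f"{name}: AUC={auc:.3f}, top factors={top}")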

  1. Climatic change on the Gulf of Fonseca (Central America) using two-step statistical downscaling of CMIP5 model outputs

    Science.gov (United States)

    Ribalaygua, Jaime; Gaitán, Emma; Pórtoles, Javier; Monjo, Robert

    2018-05-01

    A two-step statistical downscaling method has been reviewed and adapted to simulate twenty-first-century climate projections for the Gulf of Fonseca (Central America, Pacific Coast) using Coupled Model Intercomparison Project (CMIP5) climate models. The downscaling methodology is adjusted after identifying good predictor fields for this area (where the geostrophic approximation fails and the real wind fields are the most applicable). The method's performance for daily precipitation and maximum and minimum temperature was analysed and yielded suitable results for all variables. For instance, the method is able to simulate the characteristic cycle of the wet season for this area, which includes a mid-summer drought between two peaks. Future projections show a gradual temperature increase throughout the twenty-first century and a change in the features of the wet season (the first peak and mid-summer rainfall being reduced relative to the second peak, an earlier onset of the wet season and a broader second peak).

  2. The design of a real-time formative evaluation of the implementation process of lifestyle interventions at two worksites using a 7-step strategy (BRAVO@Work).

    Science.gov (United States)

    Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem

    2012-08-07

    Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or a department to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use of and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires) applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and after 6, 12 and 18 months. This is one of the few

  4. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

    We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals; therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
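
    The underlying idea can be written compactly (our rendering of the standard continuous time model, not a quote from the package documentation): the latent process η(t) follows a linear stochastic differential equation, and each observation interval Δt induces an exact discrete-time autoregressive matrix,

        \[
          \mathrm{d}\boldsymbol{\eta}(t) = \bigl(\mathbf{A}\,\boldsymbol{\eta}(t) + \mathbf{b}\bigr)\,\mathrm{d}t + \mathbf{G}\,\mathrm{d}\mathbf{W}(t),
          \qquad
          \mathbf{A}^{*}_{\Delta t} = e^{\mathbf{A}\,\Delta t},
        \]

    so unequal or subject-specific intervals simply produce different, mutually consistent discrete parameters rather than biased ones.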

  5. Real time natural object modeling framework

    International Nuclear Information System (INIS)

    Rana, H.A.; Shamsuddin, S.M.; Sunar, M.H.

    2008-01-01

    CG (Computer Graphics) is a key technology for producing visual contents. Currently, computer generated imagery techniques are being developed and applied, particularly in the fields of virtual reality applications, film production, training and flight simulators, to provide total composition of realistic computer graphic images. Natural objects like clouds are an integral feature of the sky; without them, synthetic outdoor scenes seem unrealistic. Modeling and animating such objects is a difficult task. Most systems are difficult to use, as they require adjustment of numerous complex parameters and are non-interactive. This paper presents an intuitive, interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. A high-level interface models clouds through the visual use of cubes. Clouds are rendered by making use of the hardware-accelerated API OpenGL. The resulting interactive design and rendering system produces perceptually convincing cloud models that can be used in any interactive system. (author)

  6. Modeling accuracy as a function of response time with the generalized linear mixed effects model.

    Science.gov (United States)

    Davidson, D J; Martin, A E

    2013-09-01

    In psycholinguistic studies using error rates as a response measure, response times (RT) are most often analyzed independently of the error rate, although it is widely recognized that they are related. In this paper we present a mixed effects logistic regression model for the error rate that uses RT as a trial-level fixed- and random-effect regression input. Production data from a translation-recall experiment are analyzed as an example. Several model comparisons reveal that RT improves the fit of the regression model for the error rate. Two simulation studies then show how the mixed effects regression model can identify individual participants for whom (a) faster responses are more accurate, (b) faster responses are less accurate, or (c) there is no relation between speed and accuracy. These results show that this type of model can serve as a useful adjunct to traditional techniques, allowing psycholinguistic researchers to examine more closely the relationship between RT and accuracy in individual subjects and better account for the variability which may be present, as well as a preliminary step to more advanced RT-accuracy modeling.
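
    In symbols, the trial-level model described above can be sketched as follows (our notation, with subject j and trial i; the abstract specifies RT as both a fixed- and random-effect input):

        \[
          \operatorname{logit}\,\Pr(\text{error}_{ij} = 1)
            = (\beta_0 + b_{0j}) + (\beta_1 + b_{1j})\,\mathrm{RT}_{ij},
          \qquad
          (b_{0j}, b_{1j}) \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}),
        \]

    so the sign of β1 + b1j distinguishes the three participant types: faster-is-more-accurate, faster-is-less-accurate, and no speed-accuracy relation.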

  7. Adaptive Modeling and Real-Time Simulation

    Science.gov (United States)

    1984-01-01

    (Only OCR fragments of the report's front matter were extracted for this record: a citation to Artificial Intelligence, Vol. 13, pp. 27-39 (1980) describing circumscription; the keywords artificial intelligence, truth maintenance, planning, resolution, modeling, world models; and an abstract fragment stating that the work represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in…)

  8. A congested and dwell time dependent transit corridor assignment model

    OpenAIRE

    Alonso Oreña, Borja; Muñoz, Juan Carlos; Ibeas Portilla, Ángel; Moura Berodia, José Luis

    2016-01-01

    This research proposes an equilibrium assignment model for congested public transport corridors in urban areas. In this model, journey times incorporate the effect of bus queuing on travel times and boarding and alighting passengers on dwell times at stops. The model also considers limited bus capacity leading to longer waiting times and more uncomfortable journeys. The proposed model is applied to an example network, and the results are compared with those obtained in a recent study. This is...

  9. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying

    DEFF Research Database (Denmark)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F.C.; Corver, Jos

    2018-01-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step...

  10. Analysis of a risk prevention document using dependability techniques: a first step towards an effectiveness model

    Science.gov (United States)

    Ferrer, Laetitia; Curt, Corinne; Tacnet, Jean-Marc

    2018-04-01

    Major hazard prevention is a main challenge given that it is specifically based on information communicated to the public. In France, preventive information is notably provided by way of local regulatory documents. Unfortunately, the law specifies few requirements concerning their content; one can therefore question the impact on the general population relative to the way such a document is actually created. The purpose of our work is thus to propose an analytical methodology to evaluate the effectiveness of preventive risk communication documents. The methodology is based on dependability approaches and is applied in this paper to the Document d'Information Communal sur les Risques Majeurs (DICRIM; in English, Municipal Information Document on Major Risks). DICRIM has to be produced by mayors and addressed to the public to provide information on major hazards affecting their municipalities. An analysis of the document's compliance with the law is carried out through the identification of regulatory detection elements. These are applied to a database of 30 DICRIMs. This analysis leads to a discussion on points such as the usefulness of the missing elements. External and internal function analysis permits the identification of the form and content requirements and the service and technical functions of the document and its components (here its sections). The results are used to carry out an FMEA (failure modes and effects analysis), which allows us to define the failures and to identify detection elements. This permits the evaluation of the effectiveness of the form and content of each component of the document. The outputs are validated by experts from the different fields investigated. These results will be used to build, in future work, a decision support model for the municipality (or specialised consulting firms) in charge of drawing up such documents.

  11. Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh-Bénard convection

    Science.gov (United States)

    Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.

    2015-10-01

    An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimum of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ~ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^{5/12}, which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.

  12. A two-step approach for fluidized bed granulation in pharmaceutical processing: Assessing different models for design and control.

    Science.gov (United States)

    Ming, Liangshan; Li, Zhe; Wu, Fei; Du, Ruofei; Feng, Yi

    2017-01-01

    Various modeling techniques were used to understand fluidized bed granulation using a two-step approach. First, Plackett-Burman design (PBD) was used to identify the high-risk factors. Then, Box-Behnken design (BBD) was used to analyze and optimize those high-risk factors. The relationship between the high-risk input variables (inlet air temperature X1, binder solution rate X3, and binder-to-powder ratio X5) and quality attributes (flowability Y1, temperature Y2, moisture content Y3, aggregation index Y4, and compactability Y5) of the process was investigated using response surface model (RSM), partial least squares method (PLS) and artificial neural network of multilayer perceptron (MLP). The morphological study of the granules was also investigated using a scanning electron microscope. The results showed that X1, X3, and X5 significantly affected the properties of granule. The RSM, PLS and MLP models were found to be useful statistical analysis tools for a better mechanistic understanding of granulation. The statistical analysis results showed that the RSM model had a better ability to fit the quality attributes of granules compared to the PLS and MLP models. Understanding the effect of process parameters on granule properties provides the basis for modulating the granulation parameters and optimizing the product performance at the early development stage of pharmaceutical products.
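
    A hedged sketch of the two-step design-of-experiments workflow, using the pyDOE2 helper library as a stand-in for whatever tooling the authors used (the factor names X1, X3, X5 come from the abstract; everything else is illustrative):

        import numpy as np
        from pyDOE2 import pbdesign, bbdesign   # assumed third-party DoE library

        # Step 1: Plackett-Burman screening of candidate factors (coded -1/+1),
        # used to flag the high-risk factors.
        screening = pbdesign(7)                 # e.g. 7 candidate factors -> 8 runs
        print(screening.shape)                  # (8, 7)

        # Step 2: Box-Behnken response-surface design on the three high-risk
        # factors identified above: inlet air temperature (X1), binder solution
        # rate (X3) and binder-to-powder ratio (X5), coded on [-1, 1].
        rsm = bbdesign(3, center=3)             # 12 edge-midpoint runs + 3 centre runs
        print(rsm.shape)                        # (15, 3)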

  15. Real-Time Vocal Tract Modelling

    Directory of Open Access Journals (Sweden)

    K. Benkrid

    2008-03-01

    To date, most speech synthesis techniques have relied upon the representation of the vocal tract by some form of filter, a typical example being linear predictive coding (LPC). This paper describes the development of a physiologically realistic model of the vocal tract using the well-established technique of transmission line modelling (TLM). This technique is based on the principle of wave scattering at transmission line segment boundaries and may be used in one, two, or three dimensions. This work uses this technique to model the vocal tract using a one-dimensional transmission line. A six-port scattering node is applied in the region separating the pharyngeal, oral, and the nasal parts of the vocal tract.

  16. The Effect of Phosphoric Acid Pre-etching Times on Bonding Performance and Surface Free Energy with Single-step Self-etch Adhesives.

    Science.gov (United States)

    Tsujimoto, A; Barkmeier, W W; Takamizawa, T; Latta, M A; Miyazaki, M

    2016-01-01

    The purpose of this study was to evaluate the effect of phosphoric acid pre-etching times on shear bond strength (SBS) and surface free energy (SFE) with single-step self-etch adhesives. The three single-step self-etch adhesives used were: 1) Scotchbond Universal Adhesive (3M ESPE), 2) Clearfil tri-S Bond (Kuraray Noritake Dental), and 3) G-Bond Plus (GC). Two groups without pre-etching were prepared: 1) untreated enamel, and 2) enamel surfaces ultrasonically cleaned with distilled water for 30 seconds to remove the smear layer. There were four pre-etching groups, in which enamel surfaces were pre-etched with phosphoric acid (Etchant, 3M ESPE) for 1) 3 seconds, 2) 5 seconds, 3) 10 seconds, or 4) 15 seconds. Resin composite was bonded to the treated enamel surface to determine SBS. The SFEs of treated enamel surfaces were determined by measuring the contact angles of three test liquids. Scanning electron microscopy was used to examine the enamel surfaces and the enamel-adhesive interface. The specimens with phosphoric acid pre-etching showed significantly higher SBS and SFEs than the specimens without phosphoric acid pre-etching, regardless of the adhesive system used. SBS and SFEs did not increase for phosphoric acid pre-etching times over 3 seconds. There were no significant differences in SBS and SFEs between the specimens with and without a smear layer. The data suggest that phosphoric acid pre-etching of ground enamel improves the bonding performance of single-step self-etch adhesives, but these bonding properties do not increase for pre-etching times over 3 seconds.

  17. Modeling Venus-Like Worlds Through Time

    OpenAIRE

    Way, M. J.; Del Genio, Anthony; Amundsen, David S.

    2018-01-01

    We explore the atmospheric and surface history of a hypothetical paleo-Venus climate using a 3-D General Circulation Model. We constrain our model with the in-situ and remote sensing Venus data available today. Given that Venus and Earth are believed to be similar geochemically some aspects of Earth's history are also utilized. We demonstrate that it is possible for ancient Venus and Venus-like exoplanetary worlds to exist within the liquid water habitable zone with insolations up to nearly 2...

  18. Axiomatics of uniform space-time models

    International Nuclear Information System (INIS)

    Levichev, A.V.

    1983-01-01

    The mathematical statement of the space-time axiomatics of the special theory of relativity is given; it postulates that the space-time M is a connected, simply connected, Hausdorff, locally compact four-dimensional topological space with a given order. The following theorem is proved: if the invariant order in the four-dimensional group M is given by a semi-group P whose contingent cone K contains interior points, then M is commutative. An analogous theorem holds for groups of dimension two and three.

  19. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model

  20. A continuous-time control model on production planning network ...

    African Journals Online (AJOL)

    DEA Omorogbe, MIU Okunsebor. Abstract: In this paper, we give a slightly detailed review of the Graves and Hollywood constant inventory tactical planning model for a job shop. The limitations of this model are pointed out and a continuous time ...

  1. Time-resolved spectral characterization of ring cavity surface emitting and ridge-type distributed feedback quantum cascade lasers by step-scan FT-IR spectroscopy.

    Science.gov (United States)

    Brandstetter, Markus; Genner, Andreas; Schwarzer, Clemens; Mujagic, Elvis; Strasser, Gottfried; Lendl, Bernhard

    2014-02-10

    We present the time-resolved comparison of pulsed 2nd order ring cavity surface emitting (RCSE) quantum cascade lasers (QCLs) and pulsed 1st order ridge-type distributed feedback (DFB) QCLs using a step-scan Fourier transform infrared (FT-IR) spectrometer. Laser devices were part of QCL arrays and fabricated from the same laser material. Required grating periods were adjusted to account for the grating order. The step-scan technique provided a spectral resolution of 0.1 cm(-1) and a time resolution of 2 ns. As a result, it was possible to gain information about the tuning behavior and potential mode-hops of the investigated lasers. Different cavity-lengths were compared, including 0.9 mm and 3.2 mm long ridge-type and 0.97 mm (circumference) ring-type cavities. RCSE QCLs were found to have improved emission properties in terms of line-stability, tuning rate and maximum emission time compared to ridge-type lasers.

  2. Modelling biological pathway dynamics with Timed Automata

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Urquidi Camacho, R.A.; Wanders, B.; van der Vet, P.E.; Karperien, Hermanus Bernardus Johannes; Langerak, Romanus; van de Pol, Jan Cornelis; Post, Janine Nicole

    2012-01-01

    When analysing complex interaction networks occurring in biological cells, a biologist needs computational support in order to understand the effects of signalling molecules (e.g. growth factors, drugs). ANIMO (Analysis of Networks with Interactive MOdelling) is a tool that allows the user to create

  3. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    The effect of receiver coil alignment errors δ on the response of electromagnetic measurements in a layered earth model is studied. The statistics of the generalized least squares inverse were employed to analyze the errors in three different geophysical applications. The following results were obtained: (i) The FEM ellipticity is ...

  4. Null Models for Everyone: A Two-Step Approach to Teaching Null Model Analysis of Biological Community Structure

    Science.gov (United States)

    McCabe, Declan J.; Knight, Evelyn J.

    2016-01-01

    Since being introduced by Connor and Simberloff in response to Diamond's assembly rules, null model analysis has been a controversial tool in community ecology. Despite being commonly used in the primary literature, null model analysis has not featured prominently in general textbooks. Complexity of approaches along with difficulty in interpreting…

  5. Numerical time integration for air pollution models

    NARCIS (Netherlands)

    J.G. Verwer (Jan); W. Hundsdorfer (Willem); J.G. Blom (Joke)

    1998-01-01

    Due to the large number of chemical species and the three space dimensions, off-the-shelf stiff ODE integrators are not feasible for the numerical time integration of stiff systems of advection-diffusion-reaction equations of the form \[ \frac{\partial c}{\partial t} + \nabla \cdot (u\,c) = \nabla \cdot (K\,\nabla c) + R(c). \]

  6. Modeling Time Series Data for Supervised Learning

    Science.gov (United States)

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provide a high-dimensional data vector that challenges the learning…

  7. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
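
    A hedged sketch of the modeling idea: each edge carries a function mapping departure time to earliest arrival time, and a Dijkstra-style search stays exact as long as those functions are non-decreasing (the FIFO property). The representation below is ours, not the paper's data structure.

        import heapq

        def earliest_arrival(graph, source, target, t0):
            # graph[u] is a list of (v, f) pairs, where f(t_dep) gives the
            # earliest arrival at v when leaving u at time t_dep (e.g. the
            # next timetabled train departure plus its travel time).
            best = {source: t0}
            heap = [(t0, source)]
            while heap:
                t, u = heapq.heappop(heap)
                if u == target:
                    return t
                if t > best.get(u, float("inf")):
                    continue          # stale heap entry
                for v, f in graph.get(u, []):
                    ta = f(t)
                    if ta < best.get(v, float("inf")):
                        best[v] = ta
                        heapq.heappush(heap, (ta, v))
            return float("inf")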

  8. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.

  9. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (The former part of the step 3)

    International Nuclear Information System (INIS)

    Onoe, Hironori; Saegusa, Hiromitsu; Endo, Yoshinobu

    2005-07-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations are being conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations in the former part of Step 3 (deep borehole investigations without vertical seismic profiling investigations), in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) the uncertainty of the site-scale hydrogeological model decreases as the stepwise investigations proceed; 2) borehole investigations combined with hydraulic monitoring are useful for decreasing the uncertainty of the hydrogeological model. The main items specified for further investigation are: 1) the trend, length, and hydraulic parameters of faults confirmed at the MIU construction site; 2) the shape of geological layer boundaries and the hydraulic parameters of the rock; 3) the hydraulic head distribution deep underground. (author)

  10. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Directory of Open Access Journals (Sweden)

    Shaofeng Wang

    2017-05-01

    Mineral reserve estimation and mining design depend on a precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes, 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for grade distribution on the two vertical sections at the roadways, and 3D linear interpolation for grade distribution between the sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using the natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors and relative standard deviations of the errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa; and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
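
    A hedged miniature of the multi-step idea in Python: 2D interpolation on each roadway section, then 1D linear blending between the two sections. SciPy's griddata offers 'nearest', 'linear' and 'cubic' but not the paper's preferred natural neighbor method, which would need a dedicated library; the function names are ours.

        import numpy as np
        from scipy.interpolate import griddata

        def section_grade(points, grades, xi, method="linear"):
            # 2D interpolation of grade over one vertical roadway section.
            # points: (n, 2) known sample locations; xi: (m, 2) query grid.
            return griddata(points, grades, xi, method=method)

        def grade_between_sections(g_left, g_right, frac):
            # 3D step: linear interpolation between the two section grids;
            # frac = 0 at the left roadway, 1 at the right roadway.
            return (1.0 - frac) * g_left + frac * g_right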

  11. Time Series Modeling for Structural Response Prediction

    Science.gov (United States)

    1988-11-14

    (Only report front matter was extracted for this record: list-of-tables entries for 2DOF and 3DOF simulated and experimental data and MPEM estimates for MDOF data with closely spaced modes; an abbreviations list, including 2DOF: two-degree-of-freedom, 2LS: two-stage least squares method, 3DOF: three-degree-of-freedom; a caption for Table 5 on 3DOF simulated data; and the opening word of the summary.)

  12. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    In this paper, an NDVI time series forecasting model has been developed based on the use of a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process that consists of a state space. This stochastic process undergoes transitions from one state to another in the state space with some probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
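
    For intuition, here is a hedged first-order sketch that discretizes the state space (the paper itself works with a continuous state space): bin the series, estimate the transition matrix by counting, and forecast the next value as the conditional mean.

        import numpy as np

        def markov_forecast(series, n_states=10):
            # Bin a bounded series such as NDVI into discrete states.
            edges = np.linspace(series.min(), series.max(), n_states + 1)
            states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)

            # Transition matrix estimated by counting consecutive state pairs,
            # with +1 Laplace smoothing so unseen transitions keep some mass.
            P = np.ones((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                P[a, b] += 1
            P /= P.sum(axis=1, keepdims=True)

            # One-step forecast: expected bin centre given the current state.
            centers = (edges[:-1] + edges[1:]) / 2
            return P[states[-1]] @ centers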

  13. Time Resolved Stereo Particle Image Velocimetry Measurements of the Instabilities Downstream of a Backward-Facing Step in a Swept-Wing Boundary Layer

    Science.gov (United States)

    Eppink, Jenna L.; Yao, Chung-Sheng

    2017-01-01

    Time-resolved particle image velocimetry (TRPIV) measurements are performed downstream of a swept backward-facing step with a height of 49% of the boundary-layer thickness. The results agree well qualitatively with previously reported hotwire measurements, though the amplitudes of the fluctuating components measured using TRPIV are higher. Nonetheless, the low-amplitude instabilities in the flow are fairly well resolved using TRPIV. Proper orthogonal decomposition is used to study the development of the traveling crossflow and Tollmien-Schlichting (TS) instabilities downstream of the step and to study how they interact to form the large velocity spikes that ultimately lead to transition. A secondary mode within the traveling crossflow frequency band develops with a wavelength close to that of the stationary crossflow instability, so that at a certain point in the phase, it causes an increase in the spanwise modulation initially caused by the stationary crossflow mode. This increased modulation leads to an increase in the amplitude of the TS mode, which, itself, is highly modulated through interactions with the stationary crossflow. When the traveling crossflow and TS modes align in time and space, the large velocity spikes occur. Thus, these three instabilities, which are individually of low amplitude when the spikes start to occur (U'rms/Ue < 0.03), interact and combine to cause a large flow disturbance that eventually leads to transition.

  14. With string model to time series forecasting

    Science.gov (United States)

    Pinčák, Richard; Bartoš, Erik

    2015-10-01

    The overwhelming majority of econometric models applied on a long-term basis in the financial forex market do not work sufficiently well. The reason is that transaction costs and arbitrage opportunities are not included, so the models do not simulate the real financial markets. Analyses are also typically conducted on aggregated data rather than on the real, non-equidistant data, which again does not reflect the real financial case. In this paper, we would like to show a new way to analyze and, moreover, forecast the financial market. We utilize projections of the real exchange rate dynamics onto a string-like topology in the OANDA market. This approach allows us to build stable prediction models for trading in the financial forex market. A real application of the multi-string structures is provided to demonstrate our ideas for solving the problem of robust portfolio selection. A comparison with trend-following strategies was performed, and the stability of the algorithm with respect to transaction costs over long trading periods was confirmed.

  15. Long-term prediction of chaotic time series with multi-step prediction horizons by a neural network with Levenberg-Marquardt learning algorithm

    International Nuclear Information System (INIS)

    Mirzaee, Hossein

    2009-01-01

    The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each containing ten neurons, in order to carefully map the structure of a chaotic time series such as the Mackey-Glass series. First, the MLP network is trained with 1000 data points and then tested on the next 500 points. The trained and tested network is then applied to long-term prediction of the 120 points that follow the test data. The prediction proceeds recursively: the first network inputs are the last four points of the test data; each predicted value is then shifted into the regression vector that forms the next network input, so that after the first four prediction steps the input vector consists entirely of predicted values, with each new prediction shifted in for the subsequent step.
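
    A minimal sketch of the recursive multi-step scheme just described. scikit-learn offers no Levenberg-Marquardt trainer, so the quasi-Newton 'lbfgs' solver stands in; the 1000/500/120 split and the four-lag regression vector follow the abstract, while all other choices are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Generate a Mackey-Glass-like series by Euler integration (tau = 17, dt = 1).
        n, tau = 1700, 17
        x = np.zeros(n + tau)
        x[:tau] = 1.2
        for t in range(tau, n + tau - 1):
            x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau]**10) - 0.1 * x[t]
        series = x[tau:]

        lags = 4
        X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
        y = series[lags:]

        model = MLPRegressor(hidden_layer_sizes=(10, 10, 10), solver="lbfgs",
                             max_iter=2000, random_state=0)
        model.fit(X[:1000], y[:1000])
        print("test R^2:", model.score(X[1000:1500], y[1000:1500]))

        # Free-run prediction: feed each output back into the regression vector,
        # so after four steps the input consists entirely of predicted values.
        window = list(y[1496:1500])  # the last four test-data points
        predictions = []
        for _ in range(120):
            y_hat = model.predict(np.array(window[-lags:]).reshape(1, -1))[0]
            predictions.append(y_hat)
            window.append(y_hat)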

  16. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins of one system to those of the other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
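
    A toy sketch of two cross-coupled Ising systems producing correlated volatility series. The lattice Hamiltonian, the cross-coupling strength Jc, and the mapping from magnetization changes to returns are illustrative assumptions, not the paper's exact specification.

        import numpy as np

        rng = np.random.default_rng(3)
        L, beta, J, Jc = 16, 0.4, 1.0, 0.3
        spins = [rng.choice([-1, 1], size=(L, L)) for _ in range(2)]

        def metropolis_sweep(k):
            s, other = spins[k], spins[1 - k]
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
                # The local field includes the same-site spin of the other system.
                dE = 2 * s[i, j] * (J * nb + Jc * other[i, j])
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] *= -1

        mags = [[], []]
        for sweep in range(500):
            for k in (0, 1):
                metropolis_sweep(k)
                mags[k].append(spins[k].mean())

        # Interpret magnetization changes as the returns of two coupled assets.
        r0, r1 = np.diff(mags[0]), np.diff(mags[1])
        print("cross-correlation of volatilities:", np.corrcoef(np.abs(r0), np.abs(r1))[0, 1])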

  17. A new G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, S.H.; Gardner, R.P.

    2000-01-01

    A hybrid G-M counter dead time model was derived by combining the idealized paralyzable and non-paralyzable models. The new model involves two parameters, which are the paralyzable and non-paralyzable dead times. The dead times used in the model are very closely related to the physical dead time of the G-M tube and its resolving time. To check the validity of the model, the decaying source method with 56Mn was used. The counting rates corrected by the new G-M dead time model were compared with the observed counting rates obtained from the measurement and gave very good agreement, within 5%, up to 7×10^4 counts/s for a G-M tube with a dead time of about 300 μs.
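
    A minimal sketch of applying a two-parameter dead time correction numerically. The hybrid form used below, m = n·exp(−n·τp)/(1 + n·τn), is one plausible combination of the idealized paralyzable and non-paralyzable limits; the paper's exact expression and parameter values may differ.

        import numpy as np
        from scipy.optimize import brentq

        def observed_rate(n, tau_p, tau_n):
            """Observed rate m versus true rate n (assumed hybrid form)."""
            return n * np.exp(-n * tau_p) / (1 + n * tau_n)

        def true_rate(m, tau_p, tau_n):
            # The paralyzable factor makes m(n) non-monotonic, so solve on the
            # low-rate branch, bracketing between m and the peak of m(n).
            grid = np.logspace(np.log10(max(m, 1.0)), 8, 2000)
            n_peak = grid[np.argmax(observed_rate(grid, tau_p, tau_n))]
            return brentq(lambda n: observed_rate(n, tau_p, tau_n) - m, m, n_peak)

        tau_p, tau_n = 50e-6, 50e-6  # hypothetical paralyzable/non-paralyzable parts
        print("true rate:", true_rate(3.0e3, tau_p, tau_n), "counts/s")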

  18. A Model for Industrial Real-Time Systems

    DEFF Research Database (Denmark)

    Bin Waez, Md Tawhid; Wasowski, Andrzej; Dingel, Juergen

    2015-01-01

    Introducing automated formal methods for large industrial real-time systems is an important research challenge. We propose timed process automata (TPA) for modeling and analysis of time-critical systems which can be open, hierarchical, and dynamic. The model offers two essential features for large...... industrial systems: (i) compositional modeling with reusable designs for different contexts, and (ii) an automated state-space reduction technique. Timed process automata model dynamic networks of continuous-time communicating control processes which can activate other processes. We show how to automatically...

  19. Preference as a Function of Active Interresponse Times: A Test of the Active Time Model

    Science.gov (United States)

    Misak, Paul; Cleaveland, J. Mark

    2011-01-01

    In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…

  20. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    Full Text Available The REFII model is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is that it links different methods of time series analysis, connects traditional data mining tools with time series analysis, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, meaning it is not restricted to a finite set of methods. First and foremost, it is a model for transforming time series values, which prepares the data used by different sets of methods based on the same transformation model in a given problem domain. The REFII model offers a new approach to time series analysis based on a unique transformation model, which serves as a basis for all kinds of time series analysis. A further advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.
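
    A minimal sketch of the basic trend-coding step that a REFII-style transformation rests on: consecutive pairs of normalized values are coded as R (rise), E (equal), or F (fall). The full REFII model also derives quantities such as angular coefficients and area indicators; those details, and the tolerance used here, are simplifying assumptions.

        import numpy as np

        def ref_code(series, tol=1e-3):
            s = np.asarray(series, dtype=float)
            s = (s - s.min()) / (s.max() - s.min())  # normalize to [0, 1]
            diffs = np.diff(s)
            # Code each step: rise, fall, or (within tolerance) equal.
            codes = np.where(diffs > tol, "R", np.where(diffs < -tol, "F", "E"))
            return "".join(codes)

        print(ref_code([10, 12, 12, 9, 11, 15, 15, 14]))  # -> "REFRREF"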