Molecular dynamics based enhanced sampling of collective variables with very large time steps
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations based on these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
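The splitting that underlies all multiple time step schemes can be illustrated with the standard r-RESPA propagator (shown here without the isokinetic Nosé-Hoover constraints that make the methods above resonance-free). A minimal sketch, assuming a hypothetical 1D particle with a hand-picked stiff "fast" force and a weak "slow" force:

```python
def respa_step(q, p, m, f_fast, f_slow, dt_outer, n_inner):
    """One r-RESPA step: half kick with the slow force, n_inner inner
    velocity-Verlet steps under the fast force, then another slow half kick."""
    p += 0.5 * dt_outer * f_slow(q)
    dt = dt_outer / n_inner
    for _ in range(n_inner):
        p += 0.5 * dt * f_fast(q)
        q += dt * p / m
        p += 0.5 * dt * f_fast(q)
    p += 0.5 * dt_outer * f_slow(q)
    return q, p

# Toy system: stiff harmonic force (fast, cheap) plus a weak anharmonic
# force (slow, standing in for the expensive part of the potential).
m = 1.0
f_fast = lambda q: -100.0 * q
f_slow = lambda q: -0.1 * q ** 3

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = respa_step(q, p, m, f_fast, f_slow, dt_outer=0.05, n_inner=10)
```

The outer step is set by the slow force alone; standard RESPA caps the achievable outer step at resonances near half the fast period, which is precisely the barrier the isokinetic schemes discussed above remove.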
Displacement in the parameter space versus spurious solution of discretization with large time step
International Nuclear Information System (INIS)
Mendes, Eduardo; Letellier, Christophe
2004-01-01
In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics
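The time-step dependence described here can be reproduced with the textbook forward-Euler discretization of the logistic equation dx/dt = λx(1 − x) (an illustrative stand-in, not the Monaco–Normand-Cyrot scheme analysed in the paper):

```python
def euler_logistic(x0, lam, h, n):
    """Forward-Euler discretization of dx/dt = lam * x * (1 - x)."""
    x = x0
    traj = [x]
    for _ in range(n):
        x = x + h * lam * x * (1.0 - x)
        traj.append(x)
    return traj

small = euler_logistic(0.1, 1.0, h=0.1, n=200)   # resembles the ODE: x -> 1
large = euler_logistic(0.1, 1.0, h=2.5, n=200)   # spurious bounded oscillation
```

With hλ = 2.5 the difference equation is conjugate to a logistic map in its period-doubled regime: the iterates stay bounded but oscillate forever, a solution no trajectory of the monotone continuous system exhibits — spurious in the usual sense, yet still a genuine map dynamic of the kind the authors relate back to the continuous system via a parameter-space displacement.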
The large discretization step method for time-dependent partial differential equations
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
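As one concrete member of the third-order Runge-Kutta family studied in such work, the three-stage Shu-Osher SSP RK3 scheme is a convenient reference point (chosen here for illustration; the report derives the general solution family rather than this particular member):

```python
def ssp_rk3_step(f, t, y, h):
    """One step of the Shu-Osher strong-stability-preserving RK3 scheme,
    written as convex combinations of forward-Euler substeps."""
    s1 = y + h * f(t, y)
    s2 = 0.75 * y + 0.25 * (s1 + h * f(t + h, s1))
    return y / 3.0 + (2.0 / 3.0) * (s2 + h * f(t + 0.5 * h, s2))

# Third-order convergence check on dy/dt = y, y(0) = 1, integrated to t = 1:
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = ssp_rk3_step(lambda t, y: y, t, y, h)
    t += h
```

Because each stage is a convex combination of Euler steps, the scheme inherits the stability of forward Euler under a step-size restriction, which is one of the "desired properties" trade-offs the report examines.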
Directory of Open Access Journals (Sweden)
Emily Lyle
2012-03-01
Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.
Time step MOTA thermostat simulation
International Nuclear Information System (INIS)
Guthrie, G.L.
1978-09-01
The report details the logic, program layout, and operating procedures for the time-step MOTA (Materials Open Test Assembly) thermostat simulation program known as GYRD. It will enable prospective users to understand the operation of the program, run it, and interpret the results. The time-step simulation approach was chosen to determine the maximum value of gain that could be used to minimize steady-state temperature offset without risking undamped thermal oscillations. The advantage of the GYRD program is that it directly shows hunting, ringing phenomena, and similar events. Programs BITT and CYLB are faster, but do not directly show ringing time
Navon, I. M.; Yu, Jian
A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly those related to the efficiency and stability of the T-Z scheme and to the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines of the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height and velocity fields.
Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.
2014-12-01
A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short duration (~0.1 msec) quasi-periodic bursts of electric field in the high time resolution electric field waveform, have been called Time Domain Structures (TDS). They can quite effectively interact with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, they are accelerated up to ~10 keV and their pitch angles are changed, especially for low energies (˜1 keV). Large amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several keV electrons that can be accelerated by coherent, large amplitude, upper band whistler waves to MeV energies in this two step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights:
• Time step length largely affects efficiency of MC burnup calculations.
• Efficiency of MC burnup calculations improves with decreasing time step length.
• Results were obtained from SIE-based Monte Carlo burnup calculations.
Abstract: We demonstrate that the efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler (SIE) based coupling scheme for Monte Carlo burnup calculations, which performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations become more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
Grief: Difficult Times, Simple Steps.
Waszak, Emily Lane
This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…
Symplectic integrators with adaptive time steps
Richardson, A. S.; Finn, J. M.
2012-01-01
In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
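The backdrop for this effort is that fixed-step symplectic integrators keep the energy error bounded over long times, while even higher-order non-symplectic methods drift — and naive step adaptation destroys exactly this property. A minimal fixed-step comparison on the harmonic oscillator (step sizes chosen purely for illustration):

```python
def verlet(q, p, dt, n):
    """Velocity Verlet (symplectic) for H = p^2/2 + q^2/2."""
    for _ in range(n):
        p -= 0.5 * dt * q
        q += dt * p
        p -= 0.5 * dt * q
    return q, p

def rk4(q, p, dt, n):
    """Classical RK4 (fourth-order but non-symplectic) for the same system."""
    def deriv(q, p):
        return p, -q
    for _ in range(n):
        k1q, k1p = deriv(q, p)
        k2q, k2p = deriv(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
        k3q, k3p = deriv(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
        k4q, k4p = deriv(q + dt * k3q, p + dt * k3p)
        q += dt / 6.0 * (k1q + 2 * k2q + 2 * k3q + k4q)
        p += dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return q, p

qv, pv = verlet(1.0, 0.0, dt=0.5, n=10000)   # energy error stays bounded
qr, pr = rk4(1.0, 0.0, dt=0.5, n=10000)      # energy decays steadily
```

Making the step Δ depend on t or on (q, p) breaks the structure that protects the Verlet result, unless it is restored by constructions such as the extended-phase-space or mixed-variable generating-function methods analyzed in the paper.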
Newmark local time stepping on high-performance computing architectures
Rietmann, Max
2016-11-25
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
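The payoff of LTS is visible already in an idealized cost model (a back-of-the-envelope sketch: real multilevel LTS quantizes steps into levels, e.g. powers of two, rather than giving every element its own step):

```python
import numpy as np

def lts_speedup(h, c=1.0, cfl=0.5):
    """Idealized cost comparison for explicit time stepping on a mesh with
    element sizes h: every element at the global (smallest) stable step
    versus every element at its own near-optimal step. Cost is counted as
    the number of element updates needed to reach unit simulated time."""
    dt = cfl * np.asarray(h) / c          # per-element stable time steps
    cost_global = len(h) / dt.min()       # all elements at the smallest dt
    cost_lts = np.sum(1.0 / dt)           # each element at its own dt
    return cost_global / cost_lts

# Mesh of 1000 coarse elements (h = 1) with 10 refined ones (h = 0.01),
# i.e. a 100x element-size contrast as quoted in the abstract:
h = [1.0] * 1000 + [0.01] * 10
speedup = lts_speedup(h)
```

Ten tiny elements force a 100-fold smaller global step on a thousand coarse ones; letting each element run near its own CFL limit recovers a ~50x reduction in update count in this model, before any implementation overhead.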
Newmark local time stepping on high-performance computing architectures
Energy Technology Data Exchange (ETDEWEB)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)
2017-04-01
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Time to pause before the next step
International Nuclear Information System (INIS)
Siemon, R.E.
1998-01-01
Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than ''demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...'' which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here
A parallel nearly implicit time-stepping scheme
Botchev, Mike A.; van der Vorst, Henk A.
2001-01-01
Across-the-space parallelism still remains the most mature, convenient and natural way to parallelize large scale problems. One of the major problems here is that implicit time stepping is often difficult to parallelize due to the structure of the system. Approximate implicit schemes have been suggested to circumvent the problem. These schemes have attractive stability properties and they are also very well parallelizable. The purpose of this article is to give an overall assessment of the pa...
A large number of stepping motor network construction by PLC
Mei, Lin; Zhang, Kai; Hongqiang, Guo
2017-11-01
In flexible automated lines, the equipment is complex and the control modes are flexible; coordinating the information interaction and orderly control of a large number of stepper and servo motors therefore becomes a difficult control problem. Based on an existing flexible production line, this paper comparatively studies its network strategies. Following this study, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data-interaction efficiency of the equipment and stabilize the exchanged data.
High-resolution seismic wave propagation using local time stepping
Peter, Daniel
2017-03-13
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
Diffeomorphic image registration with automatic time-step adjustment
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst
2015-01-01
In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler steps required to r… accuracy as a fixed time-step scheme, however at a much lower computational cost…
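The quantity being controlled here can be seen in a scalar toy version (a hypothetical linear velocity field, far simpler than an image-registration SVF): integrating v(x) = a·x with N Euler steps approximates its exponential flow, and the inverse-consistency error of composing the forward and backward flows shrinks roughly like 1/N:

```python
def euler_flow(a, n_steps):
    """Scale factor of the flow of v(x) = a*x integrated with n_steps
    Euler steps: x -> (1 + a/n)**n * x (the exact flow is exp(a)*x)."""
    return (1.0 + a / n_steps) ** n_steps

def inverse_consistency_error(a, n_steps):
    """Composing the forward flow of v with the forward flow of -v should
    give the identity; the residual measures the inverse inconsistency."""
    return abs(euler_flow(a, n_steps) * euler_flow(-a, n_steps) - 1.0)

err_coarse = inverse_consistency_error(0.5, 4)    # few Euler steps
err_fine = inverse_consistency_error(0.5, 64)     # many Euler steps
```

A fixed-step scheme must pick N for the worst case everywhere; the adjustment scheme proposed in the paper instead chooses the number of steps adaptively so that this error stays below a bound.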
A simple, compact, and rigid piezoelectric step motor with large step size
Wang, Qi; Lu, Qingyou
2009-08-01
We present a novel piezoelectric stepper motor featuring high compactness, rigidity, simplicity, and any direction operability. Although tested in room temperature, it is believed to work in low temperatures, owing to its loose operation conditions and large step size. The motor is implemented with a piezoelectric scanner tube that is axially cut into almost two halves and clamp holds a hollow shaft inside at both ends via the spring parts of the shaft. Two driving voltages that singly deform the two halves of the piezotube in one direction and recover simultaneously will move the shaft in the opposite direction, and vice versa.
Multiple time step integrators in ab initio molecular dynamics
International Nuclear Information System (INIS)
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-01-01
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy
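The range-separation route can be sketched with the standard erf/erfc split of the Coulomb kernel (the split itself is textbook material; applying it to build an MTS splitting of the electronic Hamiltonian is the paper's contribution):

```python
import math

def coulomb_split(r, mu):
    """Split 1/r into a rapidly varying short-range part erfc(mu*r)/r and a
    smooth long-range part erf(mu*r)/r. The smooth part changes slowly and
    can be evaluated on the large outer time step; the short-range part is
    cheap and handled on the inner step."""
    return math.erfc(mu * r) / r, math.erf(mu * r) / r

sr, lr = coulomb_split(3.0, mu=0.5)
total = sr + lr          # the two parts recover 1/r exactly
```

The separation parameter mu sets where "short range" ends: the short-range term is negligible beyond a few multiples of 1/mu, which is what makes the fast component of the force cheap.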
An explicit multi-time-stepping algorithm for aerodynamic flows
Niemann-Tuitman, B.E.; Veldman, A.E.P.
1997-01-01
An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, in different parts of the computational domain different time steps are taken, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows speedups in the order of five with respect to single time stepping are obtained.
Combating cancer one step at a time
Directory of Open Access Journals (Sweden)
R.N Sugitha Nadarajah
2016-10-01
widespread consequences, not only in a medical sense but also socially and economically,” says Dr. Abdel-Rahman. “We need to put in every effort to combat this fatal disease,” he adds.Tackling the spread of cancer and the increase in the number of cases reported every year is not without its challenges, he asserts. “I see the key challenges as the unequal availability of cancer treatments worldwide, the increasing cost of cancer treatment, and the increased median age of the population in many parts of the world, which carries with it a consequent increase in the risk of certain cancers,” he says. “We need to reassess the current pace and orientation of cancer research because, with time, cancer research is becoming industry-oriented rather than academia-oriented — which, in my view, could be very dangerous to the future of cancer research,” adds Dr. Abdel-Rahman. “Governments need to provide more research funding to improve the outcome of cancer patients,” he explains.His efforts and hard work have led to him receiving a number of distinguished awards, namely the UICC International Cancer Technology Transfer (ICRETT) fellowship in 2014 at the Investigational New Drugs Unit in the European Institute of Oncology, Milan, Italy; the EACR travel fellowship in 2015 at The Christie NHS Foundation Trust, Manchester, UK; and also several travel grants to Ireland, Switzerland, Belgium, Spain, and many other countries where he attended medical conferences. Dr. Abdel-Rahman is currently engaged in a project to establish a clinical/translational cancer research center at his institute, which seeks to incorporate various cancer-related disciplines in order to produce a real bench-to-bedside practice, hoping that it would “change research that may help shape the future of cancer therapy”.Dr. Abdel-Rahman is also an active founding member of the clinical research unit at his institute and is a representative to the prestigious European Organization for Research and
Aggressive time step selection for the time asymptotic velocity diffusion problem
International Nuclear Information System (INIS)
Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.
1984-12-01
An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large
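The discard-and-retry logic can be sketched generically (a hypothetical controller with arbitrary constants, not the ADI-specific criterion of the report), using unconditionally stable backward Euler on dy/dt = −y as the inner solver:

```python
def adaptive_advance(step, y, t, t_end, dt0, max_rel_change=0.1,
                     grow=1.3, shrink=0.5):
    """Aggressive time step selection: attempt a step, discard it and retry
    with a smaller dt whenever the solution changed too much, otherwise
    accept the step and grow dt for the next attempt."""
    dt = dt0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        y_new = step(y, t, dt)
        if abs(y_new - y) > max_rel_change * abs(y):
            dt *= shrink          # step discarded: dt was too large
            continue
        y, t = y_new, t + dt      # step accepted
        dt *= grow                # be aggressive on the next attempt
    return y

# Backward (implicit) Euler for dy/dt = -y is stable for any dt:
be_step = lambda y, t, dt: y / (1.0 + dt)
y_final = adaptive_advance(be_step, 1.0, 0.0, 5.0, dt0=1.0)
```

Because the inner scheme is unconditionally stable, a rejected step costs only one retry rather than a blow-up, so the selector can probe large Δt values freely — the supervision the user would otherwise provide by hand.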
High-resolution seismic wave propagation using local time stepping
Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul
2017-01-01
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step
The effects of age and step length on joint kinematics and kinetics of large out-and-back steps.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2008-06-01
Maximum step length (MSL) is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining maximum step length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Maximum step length was 40% greater in the younger than in the older women (P<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09-0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step liftoff (P=0.03). Maximum step length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall.
The importance of time-stepping errors in ocean models
Williams, P. D.
2011-12-01
Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
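A minimal sketch of the filter update (following the published RAW construction; the parameters here are illustrative), applied to leapfrog integration of the oscillation equation dx/dt = iωx:

```python
import cmath

def leapfrog_filtered(omega, dt, n_steps, nu=0.2, alpha=1.0):
    """Leapfrog for dx/dt = i*omega*x with the RA/RAW filter applied after
    each step. alpha = 1 gives the classical Robert-Asselin filter;
    alpha near 0.5 (e.g. 0.53) gives the RAW filter, which conserves the
    three-time-level mean state."""
    x_prev = cmath.exp(-1j * omega * dt)   # exact value at t = -dt
    x_curr = 1.0 + 0.0j                    # value at t = 0
    for _ in range(n_steps):
        x_next = x_prev + 2.0 * dt * (1j * omega * x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_prev = x_curr + alpha * d        # filter the middle time level
        x_next += (alpha - 1.0) * d        # RAW also nudges the newest level
        x_curr = x_next
    return abs(x_curr)

amp_ra = leapfrog_filtered(1.0, 0.2, 500, alpha=1.0)    # RA: amplitude damped
amp_raw = leapfrog_filtered(1.0, 0.2, 500, alpha=0.53)  # RAW: nearly conserved
```

With alpha = 1 the displacement is applied only to the middle level, the three-level mean is not conserved, and the physical mode is steadily damped; splitting the displacement between the middle and newest levels removes the leading-order damping at essentially no extra cost, which is why the change amounts to roughly one line of code in an existing model.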
Time step size selection for radiation diffusion calculations
International Nuclear Information System (INIS)
Rider, W.J.; Knoll, D.A.
1999-01-01
The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criteria related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and converges nonlinearities within a time step in the governing equations. Commonly, nonlinearities in the governing equations are evaluated using existing (old time) data. The authors refer to this as the semi-implicit (SI) method. When a method converges nonlinearities within a time step, the entire governing equation including all nonlinearities is self-consistently evaluated using advance time data (with appropriate time centering for accuracy)
Solving large scale structure in ten easy steps with COLA
Energy Technology Data Exchange (ETDEWEB)
Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Positivity-preserving dual time stepping schemes for gas dynamics
Parent, Bernard
2018-05-01
A new approach to discretizing the temporal derivative of the Euler equations is presented here which can be used with dual time stepping. The temporal discretization stencil is derived along the lines of the Cauchy-Kowalevski procedure, resulting in cross differences in spacetime but with some novel modifications which ensure the positivity of the discretization coefficients. It is then shown that the so-obtained spacetime cross differences result in changes to the wave speeds and can thus be incorporated within Roe or Steger-Warming schemes (with and without reconstruction-evolution) simply by altering the eigenvalues. The proposed approach has the advantage over alternatives that it is positivity-preserving for the Euler equations. Further, it yields monotone solutions near discontinuities while exhibiting a truncation error in smooth regions smaller than that of the second- or third-order accurate backward-difference-formula (BDF) for either small or large time steps. The high resolution and positivity preservation of the proposed discretization stencils are independent of the convergence acceleration technique, which can be set to multigrid, preconditioning, Jacobian-free Newton-Krylov, block-implicit, etc. Thus, the current paper also offers the first implicit integration of the time-accurate Euler equations that is positivity-preserving in the strict sense (that is, the density and temperature are guaranteed to remain positive). This is in contrast to all previous positivity-preserving implicit methods, which only guaranteed the positivity of the density, not of the temperature or pressure. Several stringent reacting and inert test cases confirm the positivity-preserving property of the proposed method as well as its higher resolution and higher computational efficiency over other second-order and third-order implicit temporal discretization strategies.
STEP flight experiments Large Deployable Reflector (LDR) telescope
Runge, F. C.
1984-01-01
Flight testing plans for a large deployable infrared reflector telescope to be tested on a space platform are discussed. Subsystem parts, subassemblies, and whole assemblies are discussed. Assurance of operational deployability, rigidization, alignment, and serviceability will be sought.
An adaptive time-stepping strategy for solving the phase field crystal model
International Nuclear Information System (INIS)
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-01-01
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
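The energy-based step selection can be sketched with the adaptive formula Δt = max(Δt_min, Δt_max/√(1 + α|E′(t)|²)) used in this line of work (a minimal sketch; the parameter values below are illustrative):

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    """Energy-based time step selector for gradient-flow models such as
    PFC: small steps while the free energy E(t) changes quickly, large
    steps as the dynamics approach steady state (E'(t) -> 0).
    (Sketch; dt_min, dt_max, alpha are illustrative values.)"""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))
```

Because the schemes are unconditionally energy stable, enlarging the step near steady state costs accuracy only where the solution barely changes.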
Multi-time-step domain coupling method with energy control
DEFF Research Database (Denmark)
Mahjoubi, N.; Krenk, Steen
2010-01-01
the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....
Sharing Steps in the Workplace: Changing Privacy Concerns Over Time
DEFF Research Database (Denmark)
Jensen, Nanna Gorm; Shklovski, Irina
2016-01-01
study of a Danish workplace participating in a step counting campaign. We find that concerns of employees who choose to participate and those who choose not to differ. Moreover, privacy concerns of participants develop and change over time. Our findings challenge the assumption that consumers...
Studies on steps affecting tritium residence time in solid blanket
International Nuclear Information System (INIS)
Tanaka, Satoru
1987-01-01
For a self-sustaining CTR fuel cycle, effective tritium recovery from blankets is essential. This means not only that the tritium breeding ratio must be larger than 1.0, but also that a high recovery speed is required, i.e., a short residence time of tritium in the blankets. A short residence time means that the tritium inventory in the blankets is small. In this paper, the tritium residence time and tritium inventory in a solid blanket are modeled by considering the steps constituting tritium release. Some of these tritium migration processes were experimentally evaluated. The tritium migration steps in a solid blanket using sintered breeding materials consist of diffusion in grains, desorption at grain edges, diffusion and permeation through grain boundaries, desorption at particle edges, diffusion and percolation through interconnected pores to the purging stream, and convective mass transfer to the stream. Corresponding to these steps, diffusive, soluble, adsorbed and trapped tritium inventories and the tritium in the gas phase are conceivable. A code named TTT was written to calculate these tritium inventories and the residence time of tritium. An example of the calculated results is shown. The blanket is REPUTER-1, the conceptual design of a commercial reversed-field-pinch fusion reactor studied at the University of Tokyo. The experimental studies on the migration steps of tritium are reported. (Kako, I.)
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H·, ·OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with the neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reaction kinetics cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally time-consuming. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
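The IRT idea, sampling a reaction time directly from the Green's-function reaction probability instead of tracking positions step by step, can be sketched for the simplest case of a fully diffusion-controlled pair (our own minimal illustration, not the paper's full algorithm):

```python
import math
from statistics import NormalDist

def sample_irt_time(r0, R, D, u):
    """IRT sampling for a fully diffusion-controlled pair at initial
    separation r0, reaction radius R, mutual diffusion coefficient D.
    The reaction probability by time t is
        W(t) = (R/r0) * erfc((r0 - R) / sqrt(4*D*t)).
    Given a uniform deviate u in (0,1), return the sampled reaction
    time, or math.inf if the pair escapes (probability 1 - R/r0).
    (Illustrative sketch of the IRT idea.)"""
    if u >= R / r0:
        return math.inf
    y = u * r0 / R                                        # = erfc(x), x > 0
    x = -NormalDist().inv_cdf(y / 2.0) / math.sqrt(2.0)   # inverse erfc
    return (r0 - R) ** 2 / (4.0 * D * x ** 2)
```

Inverting W(t) once per pair is what makes IRT so much faster than an SBS random walk, at the cost of losing the particle positions.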
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. Detailed description is provided for both the physics and implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
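One concrete example of the "careless adaptive time step" pitfall: after rejecting a stochastic step, the Brownian increment must be refined with a Brownian-bridge construction rather than redrawn independently, or the statistics are biased. A minimal sketch (our own illustration, not the Fortran module's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_split(dW, dt):
    """Split a Brownian increment dW over an interval dt into two
    consistent increments over dt/2 each, via the Brownian bridge:
    the midpoint is Gaussian with mean dW/2 and variance dt/4.
    Redrawing an independent increment after a step rejection would
    bias the SDE statistics. (Illustrative sketch.)"""
    mid = 0.5 * dW + rng.normal(0.0, np.sqrt(dt / 4.0))
    return mid, dW - mid
```

The two halves always sum to the original increment, so the already-sampled Brownian path is preserved when the step is retried at smaller dt.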
[Collaborative application of BEPS at different time steps].
Lu, Wei; Fan, Wen Yi; Tian, Tian
2016-09-01
BEPSHourly simulates the ecological and physiological processes of vegetation at hourly time steps, and is often applied to analyze the diurnal change of gross primary productivity (GPP) and net primary productivity (NPP) at the site scale, because of its more complex model structure and time-consuming solving process. However, the daily photosynthetic rate calculation in the BEPSDaily model is simpler and less time-consuming, not involving many iterative processes, and is suitable for simulating regional primary productivity and analyzing the spatial distribution of regional carbon sources and sinks. According to the characteristics and applicability of the BEPSDaily and BEPSHourly models, this paper proposes a method for the collaborative application of BEPS at daily and hourly time steps. Firstly, BEPSHourly is used to optimize the main photosynthetic parameters, the maximum carboxylation rate (V_cmax) and the maximum rate of photosynthetic electron transport (J_max), at the site scale; the two optimized parameters are then introduced into the BEPSDaily model to estimate NPP at the regional scale. The results showed that optimizing the main photosynthesis parameters based on flux data could improve the simulation ability of the model. In 2011, the primary productivity of the forest types in descending order was: deciduous broad-leaved forest, mixed forest, coniferous forest. The collaborative application of carbon cycle models at different time steps proposed in this study can effectively optimize the main photosynthesis parameters V_cmax and J_max, simulate the monthly averaged diurnal GPP and NPP, calculate regional NPP, and analyze the spatial distribution of regional carbon sources and sinks.
Time step size limitation introduced by the BSSN Gamma Driver
Energy Technology Data Exchange (ETDEWEB)
Schnetter, Erik, E-mail: schnetter@cct.lsu.ed [Department of Physics and Astronomy, Louisiana State University, LA (United States)
2010-08-21
Many mesh refinement simulations currently performed in numerical relativity counteract instabilities near the outer boundary of the simulation domain either by changes to the mesh refinement scheme or by changes to the gauge condition. We point out that the BSSN Gamma Driver gauge condition introduces a time step size limitation in a similar manner as a Courant-Friedrichs-Lewy condition, but which is independent of the spatial resolution. We give a didactic explanation of this issue, show why, especially, mesh refinement simulations suffer from it, and point to a simple remedy. (note)
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow for beating the best performed methods, which use the similar force splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization to the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improving stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.
Optimal order and time-step criterion for Aarseth-type N-body integrators
International Nuclear Information System (INIS)
Makino, Junichiro
1991-01-01
How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
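The standard Aarseth criterion referred to above builds the step from the acceleration and its first three time derivatives. A scalar sketch (in an N-body code the inputs are vector norms; eta ≈ 0.02 is a typical accuracy parameter):

```python
import math

def aarseth_dt(a, a1, a2, a3, eta=0.02):
    """Standard Aarseth time-step criterion:
        dt = sqrt(eta * (|a||a2| + |a1|^2) / (|a1||a3| + |a2|^2)),
    where a is the magnitude of the acceleration and a1, a2, a3 those of
    its first three time derivatives. (Sketch for scalar magnitudes.)"""
    return math.sqrt(eta * (a * a2 + a1 ** 2) / (a1 * a3 + a2 ** 2))
```

The paper's point is that this particular combination of derivatives, while adequate at fourth order, becomes unreliable for higher-order integrators, motivating a more uniformly reliable criterion.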
FTSPlot: fast time series visualization for large datasets.
Directory of Open Access Journals (Sweden)
Michael Riss
Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n × log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes (1 TiB) or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
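The hierarchic level-of-detail idea behind this kind of O(1) browsing can be sketched in a few lines: precompute per-bucket (min, max) pairs at several resolutions so that any zoom level plots O(pixels) values instead of O(n) raw samples (an illustrative sketch, not FTSPlot's actual file format):

```python
def minmax_downsample(samples, bucket):
    """One level of a min/max level-of-detail pyramid: reduce each bucket
    of raw samples to its (min, max) pair. Plotting these pairs as
    vertical bars reproduces the visual envelope of the full-resolution
    trace at a fraction of the cost. (Illustrative sketch.)"""
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append((min(chunk), max(chunk)))
    return out
```

Applying this recursively (buckets of buckets) yields the pyramid that makes render time independent of the dataset size.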
Methods for growth of relatively large step-free SiC crystal surfaces
Neudeck, Philip G. (Inventor); Powell, J. Anthony (Inventor)
2002-01-01
A method for growing arrays of large-area device-size films of step-free (i.e., atomically flat) SiC surfaces for semiconductor electronic device applications is disclosed. This method utilizes a lateral growth process that better overcomes the effect of extended defects in the seed crystal substrate that limited the obtainable step-free area achievable by prior art processes. The step-free SiC surface is particularly suited for the heteroepitaxial growth of 3C (cubic) SiC, AlN, and GaN films used for the fabrication of both surface-sensitive devices (i.e., surface channel field effect transistors such as HEMT's and MOSFET's) as well as high-electric field devices (pn diodes and other solid-state power switching devices) that are sensitive to extended crystal defects.
Time series clustering in large data sets
Directory of Open Access Journals (Sweden)
Jiří Fejfar
2011-01-01
Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are currently using the Self-organizing map (SOM) with the unsupervised learning algorithm for clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct but that we have to perform time series clustering on much larger datasets to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. We are searching for recordings with a similar development of information density in this experiment. This can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results made with different parameters of the feature vectors and the SOM itself. We describe time series in a simplistic way, evaluating standard deviations for separate parts of the recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies, varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are also discussed, and finally conclusions for further research are presented. We also present an overview of the related current literature and projects.
A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis
Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann
2017-04-01
The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for
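The k-Nearest-Neighbor disaggregation step can be sketched as follows: find the k observed days whose daily totals are closest to the synthetic daily value, pick one at random, and rescale its observed hourly pattern to match the synthetic total (our own minimal reading of the two-step approach; variable names are illustrative):

```python
import numpy as np

def knn_disaggregate(day_value, library_days, library_hours, k=5, rng=None):
    """k-Nearest-Neighbour temporal disaggregation sketch: resample an
    observed sub-daily pattern from a day with a similar daily total and
    rescale it so the hourly values sum to the synthetic daily value.
    (Illustrative sketch, not the paper's exact resampling scheme.)"""
    rng = rng or np.random.default_rng()
    idx = np.argsort(np.abs(np.asarray(library_days) - day_value))[:k]
    pick = rng.choice(idx)                       # random neighbour among the k
    pattern = np.asarray(library_hours[pick], dtype=float)
    total = pattern.sum()
    return pattern * (day_value / total) if total > 0 else pattern
```

Borrowing observed intra-day patterns keeps the disaggregated series physically plausible, which matters in catchments with short concentration times.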
Parallel time domain solvers for electrically large transient scattering problems
Liu, Yang
2014-09-26
Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain algorithm (PWTD) on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedups compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute-nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.
Coherent states for the time dependent harmonic oscillator: the step function
International Nuclear Information System (INIS)
Moya-Cessa, Hector; Fernandez Guasti, Manuel
2003-01-01
We study the time evolution for the quantum harmonic oscillator subjected to a sudden change of frequency. It is based on an approximate analytic solution to the time dependent Ermakov equation for a step function. This approach allows for a continuous treatment that differs from former studies that involve the matching of two time independent solutions at the time when the step occurs
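For reference, the time-dependent Ermakov equation solved here for the amplitude function ρ(t), with the step-function frequency written explicitly (notation ours), is

```latex
\ddot{\rho}(t) + \Omega^2(t)\,\rho(t) = \frac{1}{\rho^{3}(t)},
\qquad
\Omega(t) = \Omega_1 + \left(\Omega_2 - \Omega_1\right)\theta(t - t_0),
```

where θ is the Heaviside step function and Ω₁, Ω₂ are the oscillator frequencies before and after the sudden change at t₀.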
Large eddy simulation of turbulent premixed combustion flows over backward facing step
Energy Technology Data Exchange (ETDEWEB)
Park, Nam Seob [Yuhan University, Bucheon (Korea, Republic of); Ko, Sang Cheol [Jeju National University, Jeju (Korea, Republic of)
2011-03-15
Large eddy simulation (LES) of turbulent premixed combustion flows over backward facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of isothermal and reacting shear layer formed at a backward facing step is carried out. The calculated results are compared with the experimental results, and a good agreement is obtained.
Large eddy simulation of turbulent premixed combustion flows over backward facing step
International Nuclear Information System (INIS)
Park, Nam Seob; Ko, Sang Cheol
2011-01-01
Large eddy simulation (LES) of turbulent premixed combustion flows over backward facing step has been performed using a dynamic sub-grid G-equation flamelet model. A flamelet model for the premixed flame is combined with a dynamic sub-grid combustion model for the filtered propagation of flame speed. The objective of this study is to investigate the validity of the dynamic sub-grid G-equation model in a complex turbulent premixed combustion flow. For the purpose of validating the LES combustion model, the LES of isothermal and reacting shear layer formed at a backward facing step is carried out. The calculated results are compared with the experimental results, and a good agreement is obtained
Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)
Pestana, Reynam C.
2009-01-01
We show that the wave equation solution using a conventional finite-difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite-difference scheme that is frequently used in more conventional finite-difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite-difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
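The connection drawn above can be summarized in two lines (notation ours, for the wave equation u_tt = c²∇²u with L = −c²∇²):

```latex
u^{n+1} + u^{n-1} = 2\cos\!\left(\Delta t\, L^{1/2}\right) u^{n},
\qquad
\cos x \approx 1 - \tfrac{x^2}{2}
\;\Longrightarrow\;
u^{n+1} \approx 2u^{n} - u^{n-1} + \Delta t^{2} c^{2} \nabla^{2} u^{n},
```

i.e., truncating the Chebyshev (cosine) expansion after two terms recovers the familiar second-order time finite-difference scheme, while keeping more terms yields the dispersion-free large-time-step REM propagator.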
Large step-down DC-DC converters with reduced current stress
International Nuclear Information System (INIS)
Ismail, Esam H.
2009-01-01
In this paper, several DC-DC converters with large voltage step-down ratios are introduced. A simple modification in the output section of the conventional buck and quadratic converters can effectively extend the duty-cycle range. Only two additional components (an inductor and a diode) are necessary for extending the duty-cycle range. The topologies presented in this paper show an improvement in the duty-cycle range (about 40%) over the conventional buck and quadratic converters. Consequently, they are well suited for extreme step-down voltage conversion ratio applications. With an extended duty cycle, the current stress on all components is reduced, leading to a significant improvement in system losses. The principle of operation, theoretical analysis, and a comparison of circuit performance with other step-down converters regarding voltage and current stress are discussed. Experimental results from one prototype rated at 40 W and operating at 100 kHz are provided to verify the performance of this new family of converters. The efficiency of the proposed converters is higher than that of the quadratic converters.
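The duty-cycle argument is easy to make concrete with the ideal conversion ratios: a plain buck has M = D, a quadratic buck has M = D², so the same extreme step-down ratio is reached at a far more practical duty cycle (our own illustration of the textbook ratios; the paper's modified topologies extend this range further):

```python
def duty_cycle_buck(M):
    """Duty cycle for conversion ratio M with an ideal buck (M = D)."""
    return M

def duty_cycle_quadratic(M):
    """Duty cycle for an ideal quadratic buck (M = D^2): a 25:1 step-down
    (M = 0.04) needs D = 0.2 instead of D = 0.04, avoiding extremely
    narrow switch on-times. (Illustrative ideal-converter relations.)"""
    return M ** 0.5
```

Very small duty cycles are hard to realize at high switching frequency, which is why extending the usable duty-cycle range directly improves extreme step-down designs.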
Qi, Huijie; Niu, Lihong; Zhang, Jie; Chen, Jian; Wang, Shujie; Yang, Jingjing; Guo, Siyi; Lawson, Tom; Shi, Bingyang; Song, Chunpeng
2018-04-01
Surface plasmon resonance (SPR) nanosensors based on metallic nanohole arrays have been widely reported to detect binding interactions in biological specimens. A simple and effective method for constructing nanoscale arrays is essential for the development of SPR nanosensors. In this work, we report a one-step method to fabricate nanohole arrays by thermal nanoimprinting in the matrix of IPS (Intermediate Polymer Stamp). No additional etching process or supporting substrate is required. The preparation process is simple, time-saving and compatible for roll-to-roll process, potentially allowing mass production. Moreover, the nanohole arrays were integrated into detection platform as SPR sensors to investigate different types of biological binding interactions. The results demonstrate that our one-step method can be used to efficiently fabricate large-area and uniform nanohole arrays for biochemical sensing.
Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs
Hadjimichael, Yiannis
2017-09-30
A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
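The flavor of the SSP methods discussed in this abstract can be illustrated with the classic three-stage, third-order SSP Runge–Kutta scheme of Shu and Osher, written as convex combinations of forward Euler steps; the advection test problem, grid, and CFL number below are illustrative choices, not taken from the thesis:

```python
import numpy as np

# Three-stage, third-order SSP Runge-Kutta (Shu-Osher form) applied to the
# advection equation u_t + u_x = 0 with first-order upwind differences on a
# periodic grid. Because each stage is a convex combination of forward Euler
# steps, monotonicity of the Euler step (CFL <= 1) carries over to the scheme.

def upwind_rhs(u, dx):
    return -(u - np.roll(u, 1)) / dx  # periodic upwind derivative

def ssprk3_step(u, dt, dx):
    u1 = u + dt * upwind_rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * upwind_rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * upwind_rhs(u2, dx))

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)  # discontinuous initial data
dt = 0.9 * dx                                   # CFL number 0.9
for _ in range(int(1.0 / dt)):
    u = ssprk3_step(u, dt, dx)

# SSP + upwind introduces no new extrema: the solution stays within [0, 1]
print(u.min(), u.max())
```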
Solving point reactor kinetic equations by time step-size adaptable numerical methods
International Nuclear Information System (INIS)
Liao Chaqing
2007-01-01
Based on an analysis of the effects of time step-size on numerical solutions, this paper showed the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) were introduced: the Two-Step Method and the Embedded Runge-Kutta Method. Point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
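The step-doubling idea behind such two-step error control can be sketched as follows; the test equation, tolerance, and controller constants are illustrative assumptions, not the paper's implementation:

```python
# Step-doubling (two-step) error control for implicit Euler on dy/dt = -lam*y.
# For this linear ODE the implicit Euler update has the closed form
# y_{n+1} = y_n / (1 + lam*h), so no nonlinear solve is needed here.

def implicit_euler_step(y, h, lam):
    return y / (1.0 + lam * h)

def adaptive_implicit_euler(y0, t_end, lam, h0=0.1, tol=1e-4):
    t, y, h = 0.0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = implicit_euler_step(y, h, lam)            # one step of size h
        y_half = implicit_euler_step(y, h / 2.0, lam)     # two steps of size h/2
        y_small = implicit_euler_step(y_half, h / 2.0, lam)
        err = abs(y_small - y_big)                        # local error estimate
        if err <= tol:                                    # accept the step
            t += h
            y = y_small
        # first-order method: local error ~ h^2, so rescale with a square root
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return y

print(adaptive_implicit_euler(1.0, 1.0, lam=5.0))  # close to exp(-5)
```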
Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M
2018-04-22
We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.
Parallel time domain solvers for electrically large transient scattering problems
Liu, Yang; Yucel, Abdulkadir; Bagcý , Hakan; Michielssen, Eric
2014-01-01
scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET
Directory of Open Access Journals (Sweden)
B. Ghahraman
2016-02-01
.7. Therefore, nine different classes were formed by combining three crop types and three soil classes. The results of the numerical methods were then compared to the analytical solution of the soil moisture differential equation as a datum. Three factors (time step, initial soil water content, and maximum evaporation, ETc) were considered as influencing variables. Results and Discussion: It was clearly shown that as the crop becomes more sensitive, the dependency of ETa on ETc increases. The same is true as the soil becomes finer textured. The results showed that as water stress progresses during the time step, the relative errors of the ET computed by the different numerical methods did not depend on the initial soil moisture. Overall, and irrespective of soil type, crop type, and numerical method, the relative error increased with increasing time step and/or increasing ETc. Overall, the absolute errors were negative for implicit Euler and third-order Heun, while those of the other methods were positive. There was a systematic trend in the relative error, as it increased with sandier soil and/or crop sensitivity. Absolute errors of the ET computations decreased over consecutive time steps, which ensures the stability of the water balance predictions. It was not possible to prescribe a unique numerical method for all variables. For comparing the numerical methods, however, we took the largest relative error, corresponding to a 10-day time step and ETc equal to 12 mm.d-1, while considering soil and crop types as variables. Explicit Euler was unstable, with relative errors varying between 40% and 150%. Implicit Euler was robust, and its relative error was around 20% for all combinations of soil and crop types. The modified Euler method showed an unstable pattern: its relative error was as low as 10% in only two cases, while overall it ranged between 20% and 100%. Although the relative errors of third-order Heun were the smallest among all the methods, its robustness was not as good as that of the implicit Euler method. 
Excluding one large
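The stability contrast reported above between explicit and implicit Euler can be reproduced on a toy stiff decay equation; the rate constant and step size are illustrative and unrelated to the soil-moisture model of the study:

```python
# Explicit vs implicit Euler on the stiff decay equation dy/dt = -k*y,
# mimicking the stability contrast reported in the abstract. The rate constant
# and step size are illustrative assumptions.

def explicit_euler(y0, k, h, n):
    y = y0
    for _ in range(n):
        y = y + h * (-k * y)      # y_{n+1} = (1 - k*h) * y_n
    return y

def implicit_euler(y0, k, h, n):
    y = y0
    for _ in range(n):
        y = y / (1.0 + k * h)     # y_{n+1} = y_n / (1 + k*h), always stable
    return y

# With k*h = 3 the explicit update factor is 1 - 3 = -2: it oscillates and grows.
print(explicit_euler(1.0, k=30.0, h=0.1, n=20))  # magnitude 2**20, about 1e6
print(implicit_euler(1.0, k=30.0, h=0.1, n=20))  # decays toward 0
```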
Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems
Majumdar, Alok K.; Ravindran, S. S.
2017-01-01
Fluid and thermal transients found in rocket propulsion systems such as propellant feedline system is a complex process involving fast phases followed by slow phases. Therefore their time accurate computation requires use of short time step initially followed by the use of much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback control based adaptive time stepping algorithm, and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chill down in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
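A minimal sketch of such feedback-controlled time stepping follows, using a simple fast-slow scalar transient as a stand-in for the feedline physics; the model, target, and controller constants are illustrative assumptions, not the authors' algorithm:

```python
# Feedback-based adaptive time stepping: the controller keeps the change of a
# monitored variable per step near a target by rejecting/rescaling steps, so
# the fast phase gets small steps and the slow phase gets large ones.

def rhs(t, y):
    # fast relaxation phase followed by a slow phase (fast-slow transient)
    return -50.0 * (y - 1.0) if t < 0.2 else -0.5 * (y - 1.0)

def feedback_stepper(y0=0.0, t_end=2.0, target=0.005, h0=1e-4, h_max=0.02):
    t, y, h = 0.0, y0, h0
    steps = []
    while t < t_end:
        h = min(h, t_end - t)
        dy = h * rhs(t, y)
        if abs(dy) > 2.0 * target:      # monitored change too large: reject
            h *= 0.5
            continue
        y, t = y + dy, t + h
        steps.append(h)
        # feedback: grow/shrink the step toward the target change per step
        h = min(h * min(2.0, target / max(abs(dy), 1e-12)), h_max)
    return y, steps

y_end, steps = feedback_stepper()
print(steps[0], max(steps))  # small steps in the fast phase, larger ones later
```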
Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps
Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo
2016-01-01
Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on the NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on the GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
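Hierarchical time step schemes of this kind typically bucket particles into power-of-two fractions of the largest step; the sketch below (illustrative values, not GOTHIC's implementation) shows why this reduces work relative to a shared time step:

```python
import math

# Hierarchical (block) time steps: each particle advances with a power-of-two
# fraction of the global step chosen from its own stable time step, so slow
# particles are not forced onto the smallest step in the system.

def step_level(dt_particle, dt_max, max_level=10):
    """Smallest level L with dt_max / 2**L <= dt_particle."""
    if dt_particle >= dt_max:
        return 0
    return min(max_level, math.ceil(math.log2(dt_max / dt_particle)))

# particles with individual stable time steps (e.g. from their accelerations)
dts = [0.9, 0.4, 0.11, 0.026]
levels = [step_level(dt, dt_max=1.0) for dt in dts]
print(levels)  # -> [1, 2, 4, 6]

# work per global step: shared scheme forces everyone onto the smallest step
shared = len(dts) * 2 ** max(levels)
hierarchical = sum(2 ** lvl for lvl in levels)
print(shared / hierarchical)  # roughly a 3x reduction in particle updates
```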
Some Comments on the Behavior of the RELAP5 Numerical Scheme at Very Small Time Steps
International Nuclear Information System (INIS)
Tiselj, Iztok; Cerne, Gregor
2000-01-01
The behavior of the RELAP5 code at very short time steps is described, i.e., δt ≈ 0.01 δx/c. First, the property of the RELAP5 code to trace acoustic waves with 'almost' second-order accuracy is demonstrated. Quasi-second-order accuracy is usually achieved for acoustic waves at very short time steps but can never be achieved for the propagation of nonacoustic temperature and void fraction waves. While this feature may be beneficial for the simulations of fast transients describing pressure waves, it also has an adverse effect: The lack of numerical diffusion at very short time steps can cause typical second-order numerical oscillations near steep pressure jumps. This behavior explains why an automatic halving of the time step, which is used in RELAP5 when numerical difficulties are encountered, in some cases leads to the failure of the simulation. Second, the integration of the stiff interphase exchange terms in RELAP5 is studied. For transients with flashing and/or rapid condensation as the main phenomena, results strongly depend on the time step used. Poor accuracy is achieved with 'normal' time steps (δt ≈ δx/v) because of the very short characteristic timescale of the interphase mass and heat transfer sources. In such cases significantly different results are predicted with very short time steps because of the more accurate integration of the stiff interphase exchange terms
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
2010-07-01
... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... WATER REGULATIONS Control of Lead and Copper § 141.81 Applicability of corrosion control treatment steps...). (ii) A report explaining the test methods used by the water system to evaluate the corrosion control...
One-step electrodeposition process of CuInSe2: Deposition time effect
Indian Academy of Sciences (India)
Administrator
CuInSe2 thin films were prepared by a one-step electrodeposition process using a simplified two-electrode system. ... homojunctions or heterojunctions (Rincon et al 1983). Efficiency of ... deposition times onto indium tin oxide (ITO)-covered.
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.
2010-01-01
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Directory of Open Access Journals (Sweden)
Qianghui Zhang
2016-07-01
Full Text Available Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS, which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD based on Stolt interpolation. Finally, a modified TSP (MTSP is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application.
Guermond, J.-L.; Salgado, Abner J.
2011-01-01
In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.
Peng, Yi; Zhang, Jie; Li, Dong
2018-03-01
A large wastewater treatment plant (WWTP) in China, built with a US treatment technology, could no longer meet the new demands of the urban environment and the need for reclaimed water. Thus a multi-AO reaction process (anaerobic/oxic/anoxic/oxic/anoxic/oxic) WWTP with an underground structure was proposed to carry out the upgrade project. Four main new technologies were applied: (1) multi-AO reaction with step-feed technology; (2) deodorization; (3) new energy-saving technology such as a water-source heat pump and an optical fiber lighting system; (4) dependable support measures for the old WWTP's water quality during the new WWTP's construction. After construction, the upgraded WWTP had saved two thirds of the land occupation, increased treatment capacity by 80% and improved the effluent standard by more than two times. Moreover, it had become a benchmark for turning ecological negative capital into positive capital.
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-07-21
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
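One member of this family of splittings, the BAOAB scheme (closely related to the velocity Verlet discretization mentioned above), can be sketched for a one-dimensional harmonic oscillator; the test system and parameters are illustrative choices:

```python
import math
import random

# BAOAB splitting for Langevin dynamics on a 1D harmonic oscillator:
# half kick (B), half drift (A), exact Ornstein-Uhlenbeck thermostat (O),
# half drift (A), half kick (B). Units and parameters are illustrative.

def baoab(x, v, dt, gamma=1.0, kT=1.0, m=1.0, k=1.0, n=200000, seed=0):
    rng = random.Random(seed)
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt(kT / m * (1.0 - c1 * c1))
    xs = []
    for _ in range(n):
        v += 0.5 * dt * (-k * x) / m            # B: half kick
        x += 0.5 * dt * v                       # A: half drift
        v = c1 * v + c2 * rng.gauss(0.0, 1.0)   # O: exact OU update
        x += 0.5 * dt * v                       # A: half drift
        v += 0.5 * dt * (-k * x) / m            # B: half kick
        xs.append(x)
    return xs

xs = baoab(0.0, 0.0, dt=0.1)
var = sum(x * x for x in xs) / len(xs)
print(var)  # should be close to the equilibrium value kT/k = 1
```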
Time dependent theory of two-step absorption of two pulses
Energy Technology Data Exchange (ETDEWEB)
Rebane, Inna, E-mail: inna.rebane@ut.ee
2015-09-25
A time dependent theory of two-step absorption of two different light pulses with arbitrary durations in an electronic three-level model is proposed. The probability that the third level is excited at time t is found as a function of the time delay between the pulses, the spectral widths of the pulses and the energy relaxation constants of the excited electronic levels. Time dependent perturbation theory is applied without using the “doorway–window” approach. The time and spectral behavior of the spectrum is analyzed using a model that is as simple as possible. - Highlights: • A time dependent theory of two-step absorption in a three-level model is proposed. • Two different light pulses with arbitrary durations are considered. • Time dependent perturbation theory is applied without the “doorway–window” approach. • The time and spectral behavior of the spectra is analyzed for several cases.
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
Energy Technology Data Exchange (ETDEWEB)
Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)
2017-04-15
We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun
2014-10-31
© Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
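For context, the unmodified first-order incremental pressure-correction scheme that both modifications build on can be written as follows (a standard textbook form, not the paper's exact notation); the artificial Neumann condition on the pressure increment in the second step is the source of the boundary layer that the first modification targets:

```latex
% 1. Velocity prediction with the previous pressure:
\frac{\tilde{u}^{n+1}-u^{n}}{\Delta t}-\nu\Delta\tilde{u}^{n+1}+\nabla p^{n}=f^{n+1}
% 2. Pressure increment from a Poisson problem with an artificial Neumann condition:
-\Delta\phi^{n+1}=-\frac{1}{\Delta t}\,\nabla\cdot\tilde{u}^{n+1},
\qquad \partial_{n}\phi^{n+1}=0 \ \text{on the boundary}
% 3. Projection and pressure update:
u^{n+1}=\tilde{u}^{n+1}-\Delta t\,\nabla\phi^{n+1},
\qquad p^{n+1}=p^{n}+\phi^{n+1}
```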
Directory of Open Access Journals (Sweden)
Mona Hassan Aburahma
2015-07-01
Full Text Available Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in student enrollment annually due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step toward a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses of large lectures and of lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied in lectures presented in any educational environment to improve the active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated.
150 Mb/s WiFi transmission over 50 m large-core-diameter step-index POF
Shi, Y.; Nieto Munoz, M.; Okonkwo, C.M.; Boom, van den H.P.A.; Tangdiongga, E.; Koonen, A.M.J.
2011-01-01
We demonstrate successful transmission of WiFi signals over 50m step-index plastic optical fibre with 1mm core diameter employing an eye-safe resonant cavity light emitting diode and an avalanche photodetector. The EVM performance of 4.1% at signal data rate of 150Mb/s is achieved.
Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)
Pestana, Reynam C.; Stoffa, Paul L.
2009-01-01
an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomials terms in the rapid expansion method we can obtain the second
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
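The neighbor constraint described above can be sketched as a simple fixed-point iteration over patch time-step levels (a patch at level L steps with dt_max / 2^L); the grid layout and level values below are illustrative, not the experimental code's data structures:

```python
# Enforcing the (non-local) CFL condition with patch-local time steps: raise
# each patch's level until it differs from every neighbor by at most one, so
# no wave can outrun the time-step refinement of an adjacent patch.

def enforce_neighbor_constraint(levels, neighbors):
    """Iteratively raise levels until |L_i - L_j| <= 1 for all neighbor pairs."""
    levels = list(levels)
    changed = True
    while changed:
        changed = False
        for i, nbrs in neighbors.items():
            need = max(levels[j] for j in nbrs) - 1
            if levels[i] < need:        # a neighbor steps much faster: refine
                levels[i] = need
                changed = True
    return levels

# 1D chain of patches; patch 3 needs a very small step (high level)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
levels = enforce_neighbor_constraint([0, 0, 0, 5, 0], neighbors)
print(levels)  # -> [2, 3, 4, 5, 4]
```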
Directory of Open Access Journals (Sweden)
Romain Tisserand
2016-11-01
Full Text Available In the case of disequilibrium, the capacity to step quickly is critical for the elderly to avoid falling. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), in which elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F elongate their stepping time remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. 44 community-dwelling elderly subjects (20 F and 22 NF) performed a CSRT while kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated-measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During the APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and a slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice of balance strategy probably stems from muscular limitations and/or a greater fear of falling, and paradoxically indicates an increased risk of falls.
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each prediction horizon as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
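The iterative-versus-independent (direct) distinction can be made concrete with a toy AR(1) example; this minimal sketch illustrates the two strategies only, not the VLM model itself:

```python
def fit_ar1(series):
    # least-squares AR(1) coefficient a in the model x[t+1] ~ a * x[t]
    num = sum(x * y for x, y in zip(series[:-1], series[1:]))
    den = sum(x * x for x in series[:-1])
    return num / den

def iterative_forecast(series, a, horizon):
    # iterative strategy: feed each one-step prediction back into the model,
    # so one-step errors can accumulate over the horizon
    preds, x = [], series[-1]
    for _ in range(horizon):
        x = a * x
        preds.append(x)
    return preds

def direct_forecast(series, horizon):
    # independent (direct) strategy: fit a separate h-step model
    # x[t+h] ~ a_h * x[t] for each horizon h
    preds = []
    for h in range(1, horizon + 1):
        num = sum(x * y for x, y in zip(series[:-h], series[h:]))
        den = sum(x * x for x in series[:-h])
        preds.append(num / den * series[-1])
    return preds
```

For a noise-free AR(1) series both strategies coincide; they diverge once noise or model mismatch enters, which is the gap the VLM mixture of varied-length component models is designed to close.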
2012-06-01
The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...
Multi-step-prediction of chaotic time series based on co-evolutionary recurrent neural network
International Nuclear Information System (INIS)
Ma Qianli; Zheng Qilun; Peng Hong; Qin Jiangwei; Zhong Tanwei
2008-01-01
This paper proposes a co-evolutionary recurrent neural network (CERNN) for the multi-step prediction of chaotic time series. It estimates the proper parameters of phase space reconstruction and optimizes the structure of the recurrent neural network by a co-evolutionary strategy. The search space is separated into two subspaces, and the individuals are trained in a parallel computational procedure. The method dynamically combines the embedding method with the capability of recurrent neural networks to incorporate past experience through internal recurrence. The effectiveness of CERNN is evaluated using three benchmark chaotic time series data sets: the Lorenz series, the Mackey-Glass series and the real-world sunspot series. The simulation results show that CERNN improves the performance of multi-step prediction of chaotic time series.
Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids
International Nuclear Information System (INIS)
Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.
2017-01-01
Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESSs) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESSs, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined so as to minimize the number of unserved customers by energizing the system step by step without violating operational constraints at any time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operating conditions. Furthermore, the proposed method is validated through several case studies performed on modified IEEE 13-node and IEEE 123-node test feeders.
Development of a real time activity monitoring Android application utilizing SmartStep.
Hegde, Nagaraj; Melanson, Edward; Sazonov, Edward
2016-08-01
Footwear-based activity monitoring systems are becoming popular in academic research as well as in consumer industry segments. In our previous work, we presented developmental aspects of an insole-based activity and gait monitoring system, SmartStep, which is a socially acceptable, fully wireless and versatile insole. The present work describes the development of an Android application that captures the SmartStep data wirelessly over Bluetooth Low Energy (BLE), computes features on the received data, runs activity classification algorithms and provides real-time feedback. The development of the activity classification methods was based on data from a human study involving 4 participants. Participants were asked to perform the activities of sitting, standing, walking, and cycling while they wore the SmartStep insole system. Multinomial Logistic Discrimination (MLD) was utilized in the development of the machine learning model for activity prediction. The resulting classification model was implemented in an Android smartphone. The Android application was benchmarked for power consumption and CPU loading. Leave-one-out cross validation resulted in an average accuracy of 96.9% during the model training phase. The Android application for real-time activity classification was tested on a human subject wearing SmartStep, resulting in a testing accuracy of 95.4%.
National Aeronautics and Space Administration — TRS Technologies proposes novel single crystal piezomotors for large torque, high precision, and cryogenic actuation with capability of position set-hold with...
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA) and angle-of-arrival (AOA) measurements in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB) signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on the received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by means of a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. Simulation results are presented to analyze the performance of the estimator.
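A minimal sketch of the two-step idea, assuming a block-energy coarse search followed by a simple threshold crossing in place of the paper's hypothesis test; the sample values, block size, and threshold are illustrative:

```python
def coarse_toa(samples, block):
    # step 1: index of the block with the largest energy (coarse TOA region)
    energies = [sum(s * s for s in samples[i:i + block])
                for i in range(0, len(samples), block)]
    return max(range(len(energies)), key=energies.__getitem__)

def fine_toa(samples, block, k, threshold):
    # step 2: first sample in (and just before) the chosen block whose
    # magnitude exceeds the threshold -- a stand-in for the hypothesis test
    # that detects the first arriving path rather than the strongest one
    start = max(0, k * block - block // 2)
    for i in range(start, min(len(samples), (k + 1) * block)):
        if abs(samples[i]) >= threshold:
            return i
    return None
```

The coarse stage needs only low-rate (block) energies, while the fine stage inspects a short window, which mirrors how the two-step structure keeps the estimation time reasonable under complexity constraints.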
Wang, Zhan-zhi; Xiong, Ying
2013-04-01
Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise and low hull vibration. Compared with a single-screw system, open-water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open-water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effects of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Comparison with the experimental data shows that RANS with the sliding mesh method and the SST k-ω turbulence model gives good precision in the open-water performance prediction of contra-rotating propellers; a small time step size improves the accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.
International Nuclear Information System (INIS)
Kim, Yong-Sik; Dagalakis, Nicholas G; Gupta, Satyandra K
2013-01-01
Realizing out-of-plane actuation in micro-electro-mechanical systems (MEMS) is still a challenging task. In this paper, the design, fabrication methods and experimental results for a MEMS-based out-of-plane motion stage are presented based on bulk micromachining technologies. This stage is electrothermally actuated for out-of-plane motion by incorporating beams with step features. The fabricated motion stage has demonstrated displacements of 85 µm at a rate of 0.4 µm mA⁻¹ and generated forces of up to 11.8 mN with a stiffness of 138.8 N m⁻¹. These properties are comparable to those of in-plane motion stages, making this out-of-plane stage useful in combination with in-plane motion stages. (paper)
Steps of Supercritical Fluid Extraction of Natural Products and Their Characteristic Times
Sovová, H. (Helena)
2012-01-01
The kinetics of supercritical fluid extraction (SFE) from plants varies due to the different micro-structures of plants and their parts, the different properties of extracted substances and solvents, and the different flow patterns in the extractor. The variety of published mathematical models for SFE of natural products corresponds to this diversification. This study presents simplified equations of extraction curves in terms of the characteristic times of four single extraction steps: internal diffusion, exter...
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
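One common form of the Metropolis step for such dual-Hamiltonian schemes treats the mismatch between the reference and the cheap Hamiltonian as external work; the sketch below assumes that form and is not taken from the paper:

```python
import math
import random

def metropolis_accept(h_ref_old, h_ref_new, h_cheap_old, h_cheap_new,
                      beta, rng=random.random):
    # Accept/reject a trajectory segment generated with the cheap Hamiltonian.
    # The discretization error is treated as external work:
    #   w = (H_ref_new - H_ref_old) - (H_cheap_new - H_cheap_old)
    # Accepting with probability min(1, exp(-beta * w)) suppresses the bias
    # of propagating with the inexpensive Hamiltonian.
    work = (h_ref_new - h_ref_old) - (h_cheap_new - h_cheap_old)
    return work <= 0.0 or rng() < math.exp(-beta * work)
```

If the cheap dynamics exactly conserved the reference Hamiltonian the work would vanish and every segment would be accepted; the acceptance rate therefore measures how well the central assumption (slowly varying Hamiltonian difference) holds.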
A three-step algorithm for CANDECOMP/PARAFAC analysis of large data sets with multicollinearity
Kiers, H.A.L.
1998-01-01
Fitting the CANDECOMP/PARAFAC model by the standard alternating least squares algorithm often requires very many iterations. One case in point is that of analysing data with mild to severe multicollinearity. If, in addition, the size of the data is large, the computation of one CANDECOMP/PARAFAC
One-step large-scale deposition of salt-free DNA origami nanostructures
Linko, Veikko; Shen, Boxuan; Tapio, Kosti; Toppari, J. Jussi; Kostiainen, Mauri A.; Tuukkanen, Sampo
2015-01-01
DNA origami nanostructures have tremendous potential to serve as versatile platforms in self-assembly-based nanofabrication and in highly parallel nanoscale patterning. However, uniform deposition and reliable anchoring of DNA nanostructures often require specific conditions, such as pre-treatment of the chosen substrate or a fine-tuned salt concentration in the deposition buffer. In addition, currently available deposition techniques are suitable only for small scales. In this article, we exploit a spray-coating technique to resolve the aforementioned issues in the deposition of different 2D and 3D DNA origami nanostructures. We show that purified DNA origamis can be controllably deposited on silicon and glass substrates by the proposed method. The results are verified using either atomic force microscopy or fluorescence microscopy, depending on the shape of the DNA origami. DNA origamis are successfully deposited onto untreated substrates with a surface coverage of about 4 objects/mm². Further, the DNA nanostructures maintain their shape even if the salt residues are removed from the DNA origami fabrication buffer after the folding procedure. We believe that the presented one-step spray-coating method will find use in various fields of material sciences, especially in the development of DNA biochips and in the fabrication of metamaterials and plasmonic devices through DNA metallisation. PMID:26492833
Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng
2014-04-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to its surface, enables highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as prostate-specific antigen (paPSA) activity, in real time at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific, real-time, single-step detection in a complex bio-fluid background using very small sample volumes.
Representative elements: A step to large-scale fracture system simulation
International Nuclear Information System (INIS)
Clemo, T.M.
1987-01-01
Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with distributed modeling of the less important fractures of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for the conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
Discrete-time optimal control and games on large intervals
Zaslavski, Alexander J
2017-01-01
Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions over intervals whose length is independent of the data, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high-temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, which must scale with the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the stability constraint, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10⁴ processors.
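The Δx² stability limit that motivates STS/RKL schemes can be seen in a forward-Euler diffusion sketch (periodic 1-D grid, illustrative parameters; this is the naive explicit update, not the RKL scheme itself): halving Δx forces roughly four times as many steps for the same physical time.

```python
def diffuse_explicit(u, d, dx, t_end, safety=0.9):
    # forward-Euler diffusion with the stability-limited step dt <= dx^2/(2D);
    # this quadratic dependence on resolution is why explicit parabolic
    # updates become expensive on fine grids
    dt_max = dx * dx / (2.0 * d)
    n_steps, t = 0, 0.0
    while t < t_end:
        dt = min(safety * dt_max, t_end - t)
        u = [u[i] + d * dt / dx**2 * (u[(i + 1) % len(u)] - 2 * u[i] + u[i - 1])
             for i in range(len(u))]
        t += dt
        n_steps += 1
    return u, n_steps
```

STS/RKL methods cluster many cheap substeps into one "super step" whose effective stability limit grows quadratically with the number of substeps, recovering a hyperbolic-like cost scaling while remaining explicit and easy to parallelize.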
Intake flow and time step analysis in the modeling of a direct injection Diesel engine
Energy Technology Data Exchange (ETDEWEB)
Zancanaro Junior, Flavio V.; Vielmo, Horacio A. [Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Mechanical Engineering Dept.], E-mails: zancanaro@mecanica.ufrgs.br, vielmoh@mecanica.ufrgs.br
2010-07-01
This paper discusses the effects of the time step on the turbulent flow structure in the intake and in-cylinder systems of a Diesel engine during the intake process, under motored conditions. The three-dimensional model of a reciprocating engine comprises a bowl-in-piston combustion chamber, an intake port of the shallow-ramp helical type and an exhaust port of the conventional type. The equations are solved numerically, including a transient analysis with valve and piston movements, for an engine speed of 1500 rpm, using a commercial finite-volume CFD code with parallel computation. Two parameters are examined to characterize the in-cylinder turbulence: the discharge coefficient and the swirl ratio. These two parameters quantify the fluid flow inside the cylinder during the intake stroke, so their study and understanding are very important. Additionally, the evolutions of the discharge coefficient and swirl ratio along the crank angle are correlated and compared, with the objective of clarifying the physical mechanisms. Regarding turbulence, computations are performed with the k-ω SST eddy viscosity model in its low-Reynolds form, with standard near-wall treatment. The system of partial differential equations consists of the Reynolds-averaged compressible Navier-Stokes equations with the constitutive relations for an ideal gas, solved with a segregated solution algorithm; the enthalpy equation is also solved. A moving hexahedral trimmed-mesh independence study is presented, along with many convergence tests from which a secure criterion is established. The pressure fields are shown on a vertical plane that passes through the valves. Areas of low pressure can be seen in the valve curtain region, due to strong jet flows. It is also possible to note differences between the time steps, mainly for the smallest time step. (author)
Large Deviations for Two-Time-Scale Diffusions, with Delays
International Nuclear Information System (INIS)
Kushner, Harold J.
2010-01-01
We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.
Smart Wireless Power Transfer Operated by Time-Modulated Arrays via a Two-Step Procedure
Directory of Open Access Journals (Sweden)
Diego Masotti
2015-01-01
The paper introduces a novel method for agile and precise wireless power transmission operated by a time-modulated array (TMA). The unique, almost real-time reconfiguration capability of these arrays is fully exploited by a two-step procedure: first, a two-element time-modulated subarray is used for localization of the tagged sensors to be energized; the entire 16-element TMA then provides power to the detected tags by exploiting the fundamental and first-sideband harmonic radiation. An investigation of the best array architecture is carried out, showing the importance of the adopted nonlinear/full-wave computer-aided-design platform. Very promising simulated energy transfer performance of the entire nonlinear radiating system is demonstrated.
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow, or for particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed by combining the velocity moments obtained with standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques from data sets collected over different values of the time interval between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels than standard PIV or PTV measurements, without the need for filtering and/or windowing. Particle displacement between two frames is computed for multiple time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second-order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
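The underlying idea can be sketched as follows: assuming uncorrelated position noise of variance sigma_pos², finite-difference velocities acquire an extra variance of 2·sigma_pos²/Δt², so a linear fit of the measured variance against 1/Δt² recovers the noise-free variance as the intercept. The model and function below are an illustration, not the authors' implementation:

```python
def true_velocity_variance(meas):
    # meas: list of (dt, measured_velocity_variance) pairs.
    # Model: var_meas(dt) = var_true + 2 * sigma_pos^2 / dt^2,
    # so a least-squares fit of var_meas against x = 1/dt^2 has
    # intercept var_true and slope 2 * sigma_pos^2.
    xs = [1.0 / (dt * dt) for dt, _ in meas]
    ys = [v for _, v in meas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return intercept, slope / 2.0  # (noise-free variance, sigma_pos^2 estimate)
```

Because the noise contribution has a known Δt dependence while the true statistics do not, acquiring the same flow at several inter-frame intervals separates the two without any filtering of the signal itself.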
A one-step, real-time PCR assay for rapid detection of rhinovirus.
Do, Duc H; Laus, Stella; Leber, Amy; Marcon, Mario J; Jordan, Jeanne A; Martin, Judith M; Wadowsky, Robert M
2010-01-01
One-step, real-time PCR assays for rhinovirus have been developed for a limited number of PCR amplification platforms and chemistries, and some exhibit cross-reactivity with genetically similar enteroviruses. We developed a one-step, real-time PCR assay for rhinovirus using a sequence detection system (Applied Biosystems; Foster City, CA). The primers were designed to amplify a 120-base target in the noncoding region of picornavirus RNA, and a TaqMan (Applied Biosystems) degenerate probe was designed for the specific detection of rhinovirus amplicons. The PCR assay had no cross-reactivity with a panel of 76 nontarget nucleic acids, which included RNAs from 43 enterovirus strains. Excellent lower limits of detection relative to viral culture were observed for 38 of 40 rhinovirus reference strains representing different serotypes; the assay reproducibly detected rhinovirus serotype 2 in viral transport medium containing 10 to 10,000 TCID50 (50% tissue culture infectious dose) units/ml of the virus. However, for rhinovirus serotypes 59 and 69, the PCR assay was less sensitive than culture. Testing of 48 clinical specimens from children with cold-like illnesses for rhinovirus by the PCR and culture assays yielded detection rates of 16.7% and 6.3%, respectively. For a batch of 10 specimens, the entire assay was completed in 4.5 hours. This real-time PCR assay enables detection of many rhinovirus serotypes with the Applied Biosystems reagent-instrument platform.
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
International Nuclear Information System (INIS)
Zhang Ying; Liang Haozhao; Meng Jie
2009-01-01
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus ¹²C as an example, even with nonlocal potentials, direct ITS evolution of the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by ITS evolution of the corresponding Schroedinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
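For a Schroedinger-like (bounded-below) problem, the basic ITS iteration is easy to sketch: repeatedly apply psi ← psi − Δτ·H·psi and renormalize, so components along excited states decay as exp(−(E_k − E_0)·τ) and only the lowest state survives. The 2×2 Hamiltonian here is a toy stand-in, not the nuclear problem of the abstract (and it is exactly this decay argument that fails for the unbounded Dirac spectrum):

```python
import math

def imaginary_time_ground_state(h, psi, dtau=0.01, steps=20000):
    # evolve psi -> psi - dtau * H psi and renormalize at each step;
    # excited components decay relative to the ground state, so psi
    # converges to the lowest eigenvector of (bounded-below) H
    n = len(psi)
    for _ in range(steps):
        hpsi = [sum(h[i][j] * psi[j] for j in range(n)) for i in range(n)]
        psi = [p - dtau * hp for p, hp in zip(psi, hpsi)]
        norm = math.sqrt(sum(p * p for p in psi))
        psi = [p / norm for p in psi]
    energy = sum(psi[i] * sum(h[i][j] * psi[j] for j in range(n))
                 for i in range(n))
    return energy, psi
```

For the Dirac equation the spectrum is unbounded below, so this iteration dives into the Dirac sea instead of converging, which is why the abstract works with the Schroedinger-like equation.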
Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation
Directory of Open Access Journals (Sweden)
Xiaokun Wang
2017-01-01
In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surfaces of rigid bodies with boundary particles that interact with fluids. It contains two procedures, surface sampling and sampling relaxation, which ensure a uniform distribution of particles in fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to the previous method. The experimental results demonstrate the effectiveness of our approach.
The enhancement of time-stepping procedures in SYVAC A/C
International Nuclear Information System (INIS)
Broyd, T.W.
1986-01-01
This report summarises the work carried out on SYVAC A/C between February and May 1985 aimed at improving the way in which time-stepping procedures are handled. The majority of the work was concerned with three types of problem, viz: (i) long vault release, short geosphere response; (ii) short vault release, long geosphere response; (iii) short vault release, short geosphere response. The report contains details of changes to the logic and structure of SYVAC A/C, as well as the results of code implementation tests. It has been written primarily for members of the UK SYVAC development team, and should not be used or referred to in isolation. (author)
Time dispersion in large plastic scintillation neutron detectors
International Nuclear Information System (INIS)
De, A.; Dasgupta, S.S.; Sen, D.
1993-01-01
Time dispersion (TD) has been computed for large neutron detectors using plastic scintillators. It has been shown that TD seen by the PM tube does not necessarily increase with incident neutron energy, a result not fully in agreement with the usual finding
Vibration amplitude rule study for rotor under large time scale
International Nuclear Information System (INIS)
Yang Xuan; Zuo Jianli; Duan Changcheng
2014-01-01
The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)
The Large Observatory For x-ray Timing
DEFF Research Database (Denmark)
Feroci, M.; Herder, J. W. den; Bozzo, E.
2014-01-01
The Large Observatory For x-ray Timing (LOFT) was studied within ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study th...
Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism
International Nuclear Information System (INIS)
Stolterfoht, N.
1993-01-01
The semiclassical approximation is applied in second order to describe the time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interference between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen-orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference are regained when one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments on single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].
Real-time simulation of large-scale floods
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
De Basabe, Jonás D.; Sen, Mrinal K.
2010-01-01
popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM
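The stability point in this abstract can be illustrated concretely: for 1D linear advection, the Lax-Wendroff update is second-order accurate and stable up to Courant number 1. A minimal sketch (the grid size, Courant number, and sine profile are illustrative choices, not taken from the paper):

```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One Lax-Wendroff update for u_t + c*u_x = 0 on a periodic grid."""
    nu = c * dt / dx                      # Courant number; stable for |nu| <= 1
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)

# advect a sine wave for one full period at Courant number 0.9
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx, c = x[1] - x[0], 1.0
dt = 0.9 * dx / c
nsteps = int(round(1.0 / (c * dt)))
u = np.sin(2.0 * np.pi * x)
for _ in range(nsteps):
    u = lax_wendroff_step(u, c, dt, dx)
exact = np.sin(2.0 * np.pi * (x - c * nsteps * dt))
err = np.max(np.abs(u - exact))
```

Second-order accuracy shows up as a small phase error on the smooth profile; halving dx and dt should reduce `err` by roughly a factor of four.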
Geevers, Sjoerd; van der Vegt, J.J.W.
2017-01-01
We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim
2017-06-01
Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings.
Time simulation of flutter with large stiffness changes
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
Development of real time diagnostics and feedback algorithms for JET in view of the next step
International Nuclear Information System (INIS)
Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.
2004-01-01
Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)
International Nuclear Information System (INIS)
Omelyan, Igor; Kovalenko, Andriy
2013-01-01
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
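For readers unfamiliar with MTS-MD, the baseline splitting (standard reversible RESPA, without the isokinetic/OIN machinery this abstract builds on) can be sketched in a few lines; the force constants and step sizes below are illustrative:

```python
import numpy as np

def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One reversible RESPA multiple-time-step update: the slow force is
    applied as half-kicks at the outer step, the fast force is integrated
    with velocity Verlet at the inner step dt_outer / n_inner.
    Note: plain RESPA is resonance-limited; dt_outer must stay well below
    half the fastest period, the restriction that OIN-type schemes lift."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m       # slow half-kick
    for _ in range(n_inner):
        v = v + 0.5 * dt_inner * f_fast(x) / m   # fast half-kick
        x = x + dt_inner * v                     # drift
        v = v + 0.5 * dt_inner * f_fast(x) / m   # fast half-kick
    v = v + 0.5 * dt_outer * f_slow(x) / m       # slow half-kick
    return x, v

# demo: one particle feeling a stiff spring (fast) plus a soft spring (slow)
k_fast, k_slow, m = 100.0, 1.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
x, v = 1.0, 0.0
e0 = 0.5 * m * v**2 + 0.5 * (k_fast + k_slow) * x**2
for _ in range(1000):
    x, v = respa_step(x, v, m, f_fast, f_slow, dt_outer=0.05, n_inner=10)
e1 = 0.5 * m * v**2 + 0.5 * (k_fast + k_slow) * x**2
drift = abs(e1 - e0) / e0
```

Because the splitting is symplectic, the energy error stays bounded rather than drifting, which is what the test below checks.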
Dwell time considerations for large area cold plasma decontamination
Konesky, Gregory
2009-05-01
Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with a 12 square inch area and an integral high voltage, high frequency generator is described.
Large Time Behavior of the Vlasov-Poisson-Boltzmann System
Directory of Open Access Journals (Sweden)
Li Li
2013-01-01
The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that as time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^(-∞)), using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ found in the work of Li (2008).
Hoepfer, Matthias
co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
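The time-step selection problem described here is usually attacked with local error control; a sketch of step-doubling adaptivity around an explicit midpoint (RK2) integrator follows (the test equation, tolerance, and step-size heuristics are illustrative, not from the thesis):

```python
def rk2_step(f, t, y, h):
    """One explicit midpoint (RK2) step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    return y + h * k2

def adaptive_integrate(f, t0, t1, y0, h0=0.1, tol=1e-6):
    """Step-doubling error control: compare one full step against two half
    steps and shrink/grow h to keep the local error near `tol`."""
    t, y, h = t0, y0, h0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        y_full = rk2_step(f, t, y, h)
        y_half = rk2_step(f, t + 0.5 * h, rk2_step(f, t, y, 0.5 * h), 0.5 * h)
        err = abs(y_half - y_full)
        if err <= tol:
            t, y = t + h, y_half   # accept the more accurate two-half-step result
        # grow or shrink the step from the RK2 error model (order 2 => exponent 1/3)
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)))
    return y

y_end = adaptive_integrate(lambda t, y: -y, 0.0, 2.0, 1.0)
```

The same accept/reject pattern generalizes to co-simulation, except that the error estimate must then come from comparing sub-model exchanges rather than from re-stepping a single known ODE.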
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
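The just-in-time idea of this abstract (regenerate rather than store connectivity) can be sketched with a seeded RNG keyed on the presynaptic cell id; this is a schematic sketch of the strategy, not NEURON's JitCon implementation, and the cell counts and parameter ranges are invented:

```python
import numpy as np

N_CELLS = 10000
FANOUT = 100

def jit_targets(pre_id, n_cells=N_CELLS, fanout=FANOUT):
    """Regenerate the outgoing connectivity of cell `pre_id` on demand from a
    seeded RNG instead of storing it: same seed -> same targets/weights/delays,
    so nothing needs to be kept in memory between spikes."""
    rng = np.random.default_rng(pre_id)           # seed = presynaptic cell id
    targets = rng.choice(n_cells, size=fanout, replace=False)
    weights = rng.uniform(0.1, 1.0, size=fanout)
    delays = rng.uniform(1.0, 5.0, size=fanout)   # ms
    return targets, weights, delays

# connectivity is never stored: it is recomputed identically at each spike
t1 = jit_targets(42)
t2 = jit_targets(42)
```

This trades O(N·fanout) storage for a regeneration cost at every presynaptic spike, which is exactly the space-for-time trade-off the abstract measures.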
Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A
2013-08-01
Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); during the processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids were the predominant fatty acids; the largest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol were in Amigdalolia and Fishomi, respectively, and the major damage occurred in the lye step; the highest and the lowest polyphenols were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during the lye step, in which the polyphenol content was reduced to 1/10 of its initial value; sterols did not undergo changes during the steps. A review of olive patents shows that many fruit composition attributes, such as oil quality and quantity and the fatty acid fraction, can be changed by altering the cultivar and the process.
Avoid the tsunami of the Dirac sea in the imaginary time step method
International Nuclear Information System (INIS)
Zhang, Ying; Liang, Haozhao; Meng, Jie
2010-01-01
The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those in the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels the doubts on the ITS method in the relativistic system. (author)
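The core ITS idea (evolve in imaginary time so that excited components decay and only the lowest state survives) can be sketched for a simple non-relativistic case. This illustrates only the generic gradient-flow step, not the Dirac-to-Schroedinger-like transform of the paper; the grid, step sizes, and harmonic potential are illustrative:

```python
import numpy as np

def imaginary_time_ground_state(n=201, box=16.0, dtau=0.001, nsteps=20000):
    """Imaginary-time-step relaxation to the ground state of a 1D harmonic
    oscillator, H = -0.5 d^2/dx^2 + 0.5 x^2 (units hbar = m = omega = 1),
    using explicit Euler in imaginary time with renormalization each step."""
    x = np.linspace(-box / 2, box / 2, n)
    dx = x[1] - x[0]
    vpot = 0.5 * x**2
    psi = np.exp(-((x - 1.0) ** 2))              # arbitrary start, off-center
    for _ in range(nsteps):
        lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
        hpsi = -0.5 * lap + vpot * psi
        psi = psi - dtau * hpsi                  # psi <- psi - dtau * H psi
        psi /= np.sqrt(np.sum(psi**2) * dx)      # renormalize: kills the decay
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
    return np.sum(psi * (-0.5 * lap + vpot * psi)) * dx   # <psi|H|psi>

e = imaginary_time_ground_state()
```

Each component with energy E decays like exp(-E·τ), so after renormalization only the lowest eigenstate survives; the computed energy should approach the exact value 0.5. For a Dirac Hamiltonian this naive evolution would instead dive into the Dirac sea, which is the "tsunami" the abstract avoids.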
Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation
International Nuclear Information System (INIS)
Boer, J. de; Dannhaueser, G.
1982-01-01
The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named "importance function". It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
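A step discontinuity of the SPSD kind can be located with a simple two-window median comparison; the sketch below is not the PDC 8.0 algorithm, and the window length, noise level, and -0.5% step are illustrative:

```python
import numpy as np

def detect_step(flux, window=20):
    """Locate the most likely step discontinuity in a flux time series by
    comparing medians in trailing/leading windows at every interior point."""
    n = len(flux)
    best_i, best_jump = None, 0.0
    for i in range(window, n - window):
        jump = np.median(flux[i:i + window]) - np.median(flux[i - window:i])
        if abs(jump) > abs(best_jump):
            best_i, best_jump = i, jump
    return best_i, best_jump

rng = np.random.default_rng(0)
flux = 1.0 + 1e-4 * rng.standard_normal(1000)
flux[500:] *= 0.995        # -0.5% sensitivity dropout, as in the abstract
idx, jump = detect_step(flux)
```

Medians rather than means keep single-cadence outliers (e.g. cosmic-ray hits) from masquerading as steps; handling the partial exponential recovery mentioned in the abstract would require an explicit transient model on top of this.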
Hsu, Ming-Chen
2010-02-01
The objective of this paper is to show that use of the element-vector-based definition of stabilization parameters, introduced in [T.E. Tezduyar, Computation of moving boundaries and interfaces and stabilization parameters, Int. J. Numer. Methods Fluids 43 (2003) 555-575; T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Engrg. 190 (2000) 411-430], circumvents the well-known instability associated with conventional stabilized formulations at small time steps. We describe formulations for linear advection-diffusion and incompressible Navier-Stokes equations and test them on three benchmark problems: advection of an L-shaped discontinuity, laminar flow in a square domain at low Reynolds number, and turbulent channel flow at friction-velocity Reynolds number of 395.
International Nuclear Information System (INIS)
Suh, Yong Suk; Keum, Jong Yong; Kim, Dong Hoon; Kang, Hyeon Tae; Sung, Chan Ho; Lee, Jae Ki; Cho, Chang Hwan
2009-01-01
According to the IAEA report as of Jan. 2008, 436 nuclear power reactors are in operation over the world, and 368 of them have been operating for more than 20 years. The average I and C equipment life span is 20 years, compared with an average reactor life time of 40 to 60 years. This means that a reactor must face I and C equipment obsolescence problems once or twice during its operating years. The I and C equipment is replaced with new equipment only when an obsolescence problem occurs in a nuclear power plant. This is called an equipment basis upgrade in this paper. This replacement is a general practice that occurs only when needed. We can assume that most of the I and C equipment of a plant will meet with the obsolescence problem at almost the same time, since it started operating at the same time. Although there must be a little time difference in the occurrence of the problems among I and C equipment, the replacements will be required in consecutive years. With this assumption, it is recommendable to upgrade the equipment that will meet with the problem at the same time with new equipment, also at the same time. This is called a system basis upgrade in this paper. The system-basis replacement can be achieved on a large scale by coupling systems whose functions are related to each other and replacing them together with a new up-to-date platform. This paper focuses on the large scale upgrade of I and C systems for existing and operating NPPs. While performing a feasibility study of the large scale upgrade for Korea standard nuclear power plants (KSNPs), six major steps were developed for the study. This paper presents what is performed in each step
Unterweger, K.
2015-01-01
We propose to couple our adaptive mesh refinement software PeanoClaw with existing solvers for complex overland flows that are tailored to regular Cartesian meshes. This allows us to augment them with spatial adaptivity and local time-stepping without altering the computational kernels. FullSWOF2D—Full Shallow Water Overland Flows—here is our software of choice, though all paradigms hold for other solvers as well. We validate our hybrid simulation software in an artificial test scenario before we provide results for a large-scale flooding scenario of the Mecca region. The latter demonstrates that our coupling approach enables the simulation of complex “real-world” scenarios.
Large holographic displays for real-time applications
Schwerdtner, A.; Häussler, R.; Leister, N.
2008-02-01
Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch of a CGH, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have for a long time not been cleared, limiting the use of holography to special applications, such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at the SID conference and exhibition 2007 and at several other events.
International Nuclear Information System (INIS)
Schneeberger, B.; Breuleux, R.
1977-01-01
Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
Detection of Tomato black ring virus by real-time one-step RT-PCR.
Harper, Scott J; Delmiglio, Catia; Ward, Lisa I; Clover, Gerard R G
2011-01-01
A TaqMan-based real-time one-step RT-PCR assay was developed for the rapid detection of Tomato black ring virus (TBRV), a significant plant pathogen which infects a wide range of economically important crops. Primers and a probe were designed against existing genomic sequences to amplify a 72 bp fragment from RNA-2. The assay amplified all isolates of TBRV tested, but no amplification was observed from the RNA of other nepovirus species or healthy host plants. The detection limit of the assay was estimated to be around nine copies of the TBRV target region in total RNA. A comparison with conventional RT-PCR and ELISA indicated that ELISA, the current standard test method, lacked specificity and reacted to all nepovirus species tested, while conventional RT-PCR was approximately ten-fold less sensitive than the real-time RT-PCR assay. Finally, the real-time RT-PCR assay was tested using five different RT-PCR reagent kits and was found to be robust and reliable, with no significant differences in sensitivity being found. The development of this rapid assay should aid in quarantine and post-border surveys for regulatory agencies.
Process evaluation of treatment times in a large radiotherapy department
International Nuclear Information System (INIS)
Beech, R.; Burgess, K.; Stratford, J.
2016-01-01
Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary.
Irregular Morphing for Real-Time Rendering of Large Terrain
Directory of Open Access Journals (Sweden)
S. Kalem
2016-06-01
The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside the tile after selecting the appropriate Level-Of-Detail by adaptive sampling. This approach organizes the heightmap into a QuadTree of tiles that are processed independently. The technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile to a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its properties of localization and its compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture for supporting interactive high-quality remote visualization of very large terrain.
Yalin, Azer P; Joshi, Sachin
2014-06-03
An apparatus and method for transmission of laser pulses with high output beam quality using large-core step-index silica optical fibers having thick cladding are described. The thick cladding suppresses diffusion of modal power to higher-order modes at the core-cladding interface, thereby enabling higher beam quality, M², than is observed for large-core, thin-cladding optical fibers. For a given NA and core size, the thicker the cladding, the better the output beam quality. The mode coupling coefficient, D, has been found to scale approximately as the inverse square of the cladding dimension and the inverse square root of the wavelength. Output from a 2 m long silica optical fiber having a 100 μm core and a 660 μm cladding was found to be close to single mode, with an M² = 1.6. Another thick-cladding fiber (400 μm core and 720 μm cladding) was used to transmit 1064 nm pulses of nanosecond duration (about 6 ns) with high beam quality to form gas sparks at the focused output (focused intensity of >100 GW/cm²). Extending the pulse duration provided the ability to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without damaging the silica fiber.
Chidori, Kazuhiro; Yamamoto, Yuji
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
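The return-map analysis used above reduces to plotting each step time against the next one and fitting a line; the R² of that fit is the regularity measure, and the coefficient of variation captures step time variability. A minimal sketch, assuming the step times have already been extracted from the accelerometer signal (this is generic code, not the study's pipeline):

```python
import numpy as np

def return_map_r2(step_times):
    """R^2 of a linear fit to the return map (T[n+1] versus T[n]) of
    consecutive step times; a high R^2 indicates a regular, history-
    dependent stepping pattern."""
    x = np.asarray(step_times[:-1], float)
    y = np.asarray(step_times[1:], float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

def step_time_cv(step_times):
    """Step time variability as a coefficient of variation (std/mean)."""
    t = np.asarray(step_times, float)
    return t.std() / t.mean()
```

A strictly alternating long-short gait gives a perfect return-map fit (R² = 1) even though its coefficient of variation is large, which is why the two measures carry different information.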
Directory of Open Access Journals (Sweden)
Po Hu
2016-01-01
Full Text Available Implementing real-time machining process control on the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithread, and shared-memory technologies in conjunction. The cutting force in a specified direction of a machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The results of the experiments prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, keeping the actual cutting force around the ideal value whether the axial cutting depth changes suddenly or continuously.
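A fuzzy controller with a self-adjusting factor, as described above, is often approximated by its rule-table-equivalent linear form, in which a factor α shifts weight between the normalized error and its rate of change as the error grows. Everything below, the gains, normalization ranges, and the toy force/feed model in the test, is an assumption for illustration and not the authors' implementation.

```python
def fuzzy_step(error, d_error, e_max, u_max, alpha_lo=0.3, alpha_hi=0.8):
    """One output of a rule-equivalent fuzzy controller with a
    self-adjusting factor: u = u_max * (alpha*e + (1-alpha)*de), where
    alpha grows from alpha_lo to alpha_hi with |e|, so large errors are
    corrected aggressively and small ones are smoothed by the error-change
    term."""
    en = max(-1.0, min(1.0, error / e_max))      # normalized error
    den = max(-1.0, min(1.0, d_error / e_max))   # normalized error change
    alpha = alpha_lo + (alpha_hi - alpha_lo) * abs(en)  # self-adjusting
    return u_max * (alpha * en + (1.0 - alpha) * den)
```

In a control loop the output would trim the feed-rate override once per cycle, so the cutting force settles back toward the setpoint after a sudden change in axial cutting depth.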
Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C
2018-02-01
Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Craig Cora L
2011-06-01
Full Text Available Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents. In total, 5949 boys and 5709 girls reported daily steps. BMI was categorized as overweight or obese using Cole's cut points. Participants wore pedometers for 7 days and logged daily steps. The odds of being overweight and obese by steps/day and parent-reported time spent television watching were estimated using logistic regression for complex samples. Results Girls had a lower median steps/day (10682 versus 11059 for boys) and also a narrower variation in steps/day (interquartile range, 4410 versus 5309 for boys). 11% of children aged 5-19 years were classified as obese; 17% of boys and girls were overweight. Both boys and girls watched television between school and dinner on a typical school day. Discussion Television viewing is the more prominent factor in terms of predicting overweight, and it contributes to obesity, but steps/day attenuates the association between television viewing and obesity, and therefore can be considered protective against obesity. In addition to replacing opportunities for active alternative behaviours, exposure to television might also impact body weight by promoting excess energy intake. Conclusions In this large nationally representative sample, pedometer-determined steps/day was associated with reduced odds of being obese (but not overweight), whereas each parent-reported hour spent watching television between school and dinner increased the odds of both overweight and obesity.
On an adaptive time stepping strategy for solving nonlinear diffusion equations
International Nuclear Information System (INIS)
Chen, K.; Baines, M.J.; Sweby, P.K.
1993-01-01
A new time step selection procedure is proposed for solving nonlinear diffusion equations. It has been implemented in the ASWR finite element code of Lorenz and Svoboda [10] for 2D semiconductor process modelling diffusion equations. The strategy is based on equidistributing the local truncation errors of the numerical scheme. The use of B-splines for interpolation (as well as for the trial space) results in a banded and diagonally dominant matrix. The approximate inverse of such a matrix can be provided to a high degree of accuracy by another banded matrix, which in turn can be used to work out the approximate finite difference scheme corresponding to the ASWR finite element method, and further to calculate estimates of the local truncation errors of the numerical scheme. Numerical experiments on six full simulation problems arising in semiconductor process modelling have been carried out. Results show that our proposed strategy is more efficient and better conserves the total mass. 18 refs., 6 figs., 2 tabs
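The equidistribution idea above, pick the next time step so the estimated local truncation error stays near a fixed tolerance, can be sketched with a simple step-doubling error estimator. The ASWR/B-spline machinery is replaced here by plain explicit Euler on a generic ODE system, which is an assumed simplification for illustration only.

```python
import numpy as np

def adaptive_euler(f, u0, t_end, dt0, tol):
    """Explicit Euler with time step control: the local truncation error
    is estimated by step doubling (one dt step versus two dt/2 steps) and
    dt is rescaled so the estimate stays near tol, i.e. the error is
    equidistributed over the accepted steps."""
    t, u, dt = 0.0, np.asarray(u0, float), dt0
    steps = 0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        big = u + dt * f(u)                    # one full step
        half = u + 0.5 * dt * f(u)             # two half steps
        small = half + 0.5 * dt * f(half)
        err = float(np.max(np.abs(big - small))) + 1e-30
        if err <= tol:                         # accept, with Richardson
            u = 2.0 * small - big              # extrapolated update
            t += dt
            steps += 1
        # rescale dt toward the equidistributed error level
        dt *= min(2.0, max(0.2, 0.9 * (tol / err) ** 0.5))
    return u, steps
```

On a stiff-free test problem the controller settles onto a roughly constant per-step error near the tolerance, shrinking dt only where the solution changes fast.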
Large area spark counters with fine time and position resolution
International Nuclear Information System (INIS)
Ogawa, A.; Atwood, W.B.; Fujiwara, N.; Pestov, Yu.N.; Sugahara, R.
1983-10-01
Spark counters trace their history back over three decades but have been used in only a limited number of experiments. The key properties of these devices include their capability of precision timing (at the sub 100 ps level) and of measuring the position of the charged particle to high accuracy. At SLAC we have undertaken a program to develop these devices for use in high energy physics experiments involving large detectors. A spark counter of size 1.2 m x 0.1 m has been constructed and has been operating continuously in our test setup for several months. In this talk I will discuss some details of its construction and its properties as a particle detector. 14 references
Large, real time detectors for solar neutrinos and magnetic monopoles
International Nuclear Information System (INIS)
Gonzalez-Mestres, L.
1990-01-01
We discuss the present status of superheated superconducting granules (SSG) development for the real time detection of magnetic monopoles of any speed and of low energy solar neutrinos down to the pp region (indium project). Basic properties of SSG and progress made in the recent years are briefly reviewed. Possible ways for further improvement are discussed. The performances reached in ultrasonic grain production at ∼ 100 μm size, as well as in conventional read-out electronics, look particularly promising for a large scale monopole experiment. Alternative approaches are briefly dealt with: induction loops for magnetic monopoles; scintillators, semiconductors or superconducting tunnel junctions for a solar neutrino detector based on an indium target
Timing of the steps in transformation of C3H 10T1/2 cells by X-irradiation
International Nuclear Information System (INIS)
Kennedy, A.R.; Cairns, J.; Little, J.B.
1984-01-01
Transformation of cells in culture by chemical carcinogens or X-rays seems to require at least two steps. The initial step is a frequent event; for example, after transient exposure to either methylcholanthrene or X-rays. It has been hypothesized that the second step behaves like a spontaneous mutation in having a constant but small probability of occurring each time an initiated cell divides. We show here that the clone size distribution of transformed cells in growing cultures initiated by X-rays, is, indeed, exactly what would be expected on that hypothesis. (author)
De Basabe, Jonás D.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method (SEM) and the interior-penalty discontinuous Galerkin method (IP-DGM), which have become increasingly popular in the recent past, for acoustic or elastic wave propagation. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular, the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
Lee, Eun Seok
2000-10-01
Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For the unsteady code validation, Stokes's second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions. To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with
Real-time vibration compensation for large telescopes
Böhm, M.; Pott, J.-U.; Sawodny, O.; Herbst, T.; Kürster, M.
2014-08-01
We compare different strategies for minimizing the effects of telescope vibrations on the differential piston (optical path difference) for the Near-InfraRed/Visible Adaptive Camera and INterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT) using an accelerometer feedforward compensation approach. We summarize why this technology is important for LINC-NIRVANA, and also for future telescopes and already existing instruments. The main objective is outlining a solution for the estimation problem in general and its specifics at the LBT. Emphasis is put on realistic evaluation of the algorithms in the laboratory, such that predictions for the expected performance at the LBT can be made. Model-based estimation and broad-band filtering techniques can both be used to solve the estimation task, and their differences are discussed. Simulation results and measurements are shown to motivate our choice of the estimation algorithm for LINC-NIRVANA. The laboratory setup is aimed at imitating the vibration behaviour at the LBT in general, and of the M2 mirror as the main contributor in particular. For our measurements, we introduce a disturbance time series whose frequency spectrum is comparable to what can be measured at the LBT on a typical night. The controllers' ability to suppress vibrations in the critical frequency range of 8-60 Hz is demonstrated. The experimental results are promising, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (rms), which is significantly better than any currently commissioned system.
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar
2002-01-01
A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a lo...
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process exhibits little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation in the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
Directory of Open Access Journals (Sweden)
Kimihiro Hino
2017-12-01
Full Text Available People's year-round interpersonal step count variations according to meteorological conditions are not fully understood, because complete year-round data from a sufficient sample of the general population are difficult to acquire. This study examined the associations between meteorological conditions and objectively measured step counts using year-round data collected from a large cohort (N=24,625) in Yokohama, Japan from April 2015 to March 2016. Two-piece linear regression analysis was used to examine the associations between the monthly median daily step count and three meteorological indices (mean values of temperature, temperature-humidity index (THI), and net effective temperature (NET)). The number of steps per day peaked at temperatures between 19.4 and 20.7°C. At lower temperatures, the increase in steps per day was between 46.4 and 52.5 steps per 1°C increase. At temperatures higher than those at which step counts peaked, the decrease in steps per day was between 98.0 and 187.9 per 1°C increase. Furthermore, these effects were more obvious in elderly than non-elderly persons in both sexes. A similar tendency was seen when using THI and NET instead of temperature. Among the three meteorological indices, the highest R² value with step counts was observed with THI in all four groups. Both high and low meteorological indices discourage people from walking, and higher values of the indices adversely affect step count more than lower values, particularly among the elderly. Among the three indices assessed, THI best explains the seasonal fluctuations in step counts. Keywords: Elderly, Developed countries, Health policy, Humidity, Linear regression, Physical activity, Temperature
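Two-piece (segmented) linear regression as used above can be sketched by scanning candidate breakpoints and fitting each side by least squares. This is a generic implementation, not the study's code; the brute-force breakpoint search is an assumed simplification.

```python
import numpy as np

def two_piece_fit(x, y):
    """Fit two straight lines joined at a breakpoint c: one for x <= c
    and one for x > c. Candidate breakpoints are scanned over the
    observed x values and the split with the smallest total squared
    error is kept. Returns (c, (left_fit, right_fit), r2), where each
    fit is (slope, intercept) as returned by np.polyfit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    for i in range(2, len(x) - 2):        # need >= 2 points per piece
        left = np.polyfit(x[:i + 1], y[:i + 1], 1)
        right = np.polyfit(x[i + 1:], y[i + 1:], 1)
        sse = (np.sum((np.polyval(left, x[:i + 1]) - y[:i + 1]) ** 2)
               + np.sum((np.polyval(right, x[i + 1:]) - y[i + 1:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, x[i], left, right)
    sse, c, left, right = best
    r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)
    return c, (left, right), r2
```

On step-count-versus-temperature data this recovers the peak: a positive slope below the breakpoint and a steeper negative slope above it, mirroring the asymmetry reported in the abstract.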
Directory of Open Access Journals (Sweden)
Tandale Babasaheb V
2008-12-01
Full Text Available Abstract Background Chandipura virus (CHPV), a member of the family Rhabdoviridae, was attributed to an explosive outbreak of acute encephalitis in children in Andhra Pradesh, India in 2003 and a small outbreak among tribal children from Gujarat, Western India in 2004. The case-fatality rate ranged from 55-75%. Considering the rapid progression of the disease and high mortality, a highly sensitive method for quantifying CHPV RNA by real-time one-step reverse transcriptase PCR (real-time one-step RT-PCR) using TaqMan technology was developed for rapid diagnosis. Methods Primers and a probe for the P gene were designed and used to standardize the real-time one-step RT-PCR assay for CHPV RNA quantitation. Standard RNA was prepared by PCR amplification, TA cloning and run-off transcription. The optimized real-time one-step RT-PCR assay was compared with the diagnostic nested RT-PCR and different virus isolation systems [in vivo (mice), in ovo (eggs), in vitro (Vero E6, PS, RD and sand fly cell lines)] for the detection of CHPV. Sensitivity and specificity of the real-time one-step RT-PCR assay were evaluated against the diagnostic nested RT-PCR, which is considered the gold standard. Results Real-time one-step RT-PCR was optimized using in vitro transcribed (IVT) RNA. The standard curve showed a linear relationship over the wide range of 10^2-10^10 copies (r² = 0.99) with a maximum coefficient of variation (CV = 5.91%) for IVT RNA. The newly developed real-time RT-PCR was at par with nested RT-PCR in sensitivity and superior to cell lines and other living systems (embryonated eggs and infant mice) used for the isolation of the virus. The detection limit of both real-time one-step RT-PCR and nested RT-PCR was found to be 1.2 × 10^0 PFU/ml. RD cells, sand fly cells, infant mice, and embryonated eggs showed almost equal sensitivity (1.2 × 10^2 PFU/ml). Vero and PS cell lines (1.2 × 10^3 PFU/ml) were least sensitive to CHPV infection. Specificity of the assay was found to be 100% when RNA from other viruses or healthy
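The quantitation behind a real-time PCR standard curve like the one above is a linear fit of Ct against log10 of the template copy number; the slope gives the amplification efficiency and the fitted line is inverted to quantify unknowns. A generic sketch with idealized numbers, not the study's data or software:

```python
import numpy as np

def standard_curve(copies, ct):
    """Fit Ct versus log10(template copies) for a dilution series.
    Returns (slope, intercept, r2, efficiency), where the efficiency
    E = 10**(-1/slope) - 1 equals 1.0 for perfect per-cycle doubling
    (slope of about -3.32 cycles per decade)."""
    x = np.log10(np.asarray(copies, float))
    y = np.asarray(ct, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    eff = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, r2, eff

def quantify(ct_sample, slope, intercept):
    """Invert the fitted curve: copy number for an observed Ct."""
    return 10.0 ** ((ct_sample - intercept) / slope)
```

With an ideal dilution series spanning 10^2 to 10^10 copies, the fit recovers an efficiency of 1.0 and round-trips a sample Ct back to its copy number.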
Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.
Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna
2017-11-01
A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Pijnappels, M.A.G.M.; Delbaere, K.; Sturnieks, D.L.; Lord, S.R.
2010-01-01
Background: choice stepping reaction time (CSRT) is a functional measure that has been shown to significantly discriminate older fallers from non-fallers. Objective: to investigate how physiological and cognitive factors mediate the association between CSRT performance and multiple falls by use of
Chu, Chunlei; Stoffa, Paul L.; Seif, Roustam
2009-01-01
We present two Lax‐Wendroff type high‐order time stepping schemes and apply them to solving the 3D elastic wave equation. The proposed schemes have the same format as the Taylor series expansion based schemes, only with modified temporal extrapolation coefficients. We demonstrate by both theoretical analysis and numerical examples that the modified schemes significantly improve the stability conditions.
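The flavor of Lax-Wendroff-type time stepping, a Taylor expansion in time whose higher time derivatives are replaced by spatial operators through the governing equation, is easiest to see in the classic second-order scheme for 1D advection; the paper's modified-coefficient 3D elastic schemes build on the same idea. The grid size and CFL number below are arbitrary choices for illustration.

```python
import numpy as np

def lax_wendroff(u, a, dt, dx, nsteps):
    """Classic second-order Lax-Wendroff update for u_t + a u_x = 0 on a
    periodic grid: u^{n+1} = u - (c/2)(u_{i+1}-u_{i-1})
    + (c^2/2)(u_{i+1}-2u_i+u_{i-1}), where c = a dt/dx, i.e. a temporal
    Taylor expansion with u_tt replaced by a^2 u_xx. Stable for |c| <= 1."""
    c = a * dt / dx
    for _ in range(nsteps):
        up, um = np.roll(u, -1), np.roll(u, 1)   # u_{i+1}, u_{i-1}
        u = u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)
    return u
```

Advecting a smooth profile once around a periodic domain returns it nearly unchanged, with only the small dispersive phase error characteristic of second-order schemes.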
Iteratively improving Hi-C experiments one step at a time.
Golloshi, Rosela; Sanders, Jacob T; McCord, Rachel Patton
2018-04-30
The 3D organization of eukaryotic chromosomes affects key processes such as gene expression, DNA replication, cell division, and response to DNA damage. The genome-wide chromosome conformation capture (Hi-C) approach can characterize the landscape of 3D genome organization by measuring interaction frequencies between all genomic regions. Hi-C protocol improvements and rapid advances in DNA sequencing power have made Hi-C useful to study diverse biological systems, not only to elucidate the role of 3D genome structure in proper cellular function, but also to characterize genomic rearrangements, assemble new genomes, and consider chromatin interactions as potential biomarkers for diseases. Yet, the Hi-C protocol is still complex and subject to variations at numerous steps that can affect the resulting data. Thus, there is still a need for better understanding and control of factors that contribute to Hi-C experiment success and data quality. Here, we evaluate recently proposed Hi-C protocol modifications as well as often overlooked variables in sample preparation and examine their effects on Hi-C data quality. We examine artifacts that can occur during Hi-C library preparation, including microhomology-based artificial template copying and chimera formation that can add noise to the downstream data. Exploring the mechanisms underlying Hi-C artifacts pinpoints steps that should be further optimized in the future. To improve the utility of Hi-C in characterizing the 3D genome of specialized populations of cells or small samples of primary tissue, we identify steps prone to DNA loss which should be considered to adapt Hi-C to lower cell numbers. Copyright © 2018 Elsevier Inc. All rights reserved.
Seven steps to raise world security. Op-Ed, published in the Financial Times
International Nuclear Information System (INIS)
ElBaradei, M.
2005-01-01
In recent years, three phenomena have radically altered the security landscape. They are the emergence of a nuclear black market, the determined efforts by more countries to acquire technology to produce the fissile material usable in nuclear weapons and the clear desire of terrorists to acquire weapons of mass destruction. The IAEA has been trying to solve these new problems with existing tools. But for every step forward, we have exposed vulnerabilities in the system. The system itself - the regime that implements the non-proliferation treaty - needs reinforcement. Some of the necessary remedies can be taken in New York at the meeting to be held in May, but only if governments are ready to act. With seven straightforward steps, and without amending the treaty, this conference could reach a milestone in strengthening world security. The first step: put a five-year hold on additional facilities for uranium enrichment and plutonium separation. Second, speed up existing efforts, led by the US global threat reduction initiative and others, to modify the research reactors worldwide operating with highly enriched uranium - particularly those with metal fuel that could be readily employed as bomb material. Third, raise the bar for inspection standards by establishing the 'additional protocol' as the norm for verifying compliance with the NPT. Fourth, call on the United Nations Security Council to act swiftly and decisively in the case of any country that withdraws from the NPT, in terms of the threat the withdrawal poses to international peace and security. Fifth, urge states to act on the Security Council's recent resolution 1540, to pursue and prosecute any illicit trading in nuclear material and technology. Sixth, call on the five nuclear weapon states party to the NPT to accelerate implementation of their 'unequivocal commitment' to nuclear disarmament, building on efforts such as the 2002 Moscow treaty between Russia and the US. Last, acknowledge the volatility of
International Nuclear Information System (INIS)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-01
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
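The Legendre-based recursion described above can be sketched directly. The coefficients below follow our reading of the RKL1 construction (the first internal stage is a plain explicit Euler step; subsequent stages use a three-term recursion, and s stages advance the solution by (s²+s)/2 explicit steps' worth of time); treat them as an assumption to be checked against the cited work before serious use.

```python
import numpy as np

def rkl1_step(L, u, dt_expl, s):
    """One first-order RKL1 super-step for du/dt = L(u), where L is a
    parabolic (diffusion-like) operator and dt_expl is a stable explicit
    time step. The super time step is (s^2 + s)/2 * dt_expl."""
    dt = 0.5 * (s * s + s) * dt_expl
    mu_t1 = 2.0 / (s * s + s)
    y_jm2 = u                                  # Y_0
    y_jm1 = u + mu_t1 * dt * L(u)              # Y_1: one explicit step
    for j in range(2, s + 1):                  # three-term recursion
        mu = (2.0 * j - 1.0) / j
        nu = (1.0 - j) / j
        mu_t = mu * 2.0 / (s * s + s)
        y = mu * y_jm1 + nu * y_jm2 + mu_t * dt * L(y_jm1)
        y_jm2, y_jm1 = y_jm1, y
    return y_jm1                               # Y_s
```

For s = 1 the scheme reduces exactly to forward Euler; for larger s, a 1D heat equation can be advanced stably with super-steps far beyond the explicit limit.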
Three-step management of pneumothorax: time for a re-think on initial management†
Kaneda, Hiroyuki; Nakano, Takahito; Taniguchi, Yohei; Saito, Tomohito; Konobu, Toshifumi; Saito, Yukihito
2013-01-01
Pneumothorax is a common disease worldwide, but surprisingly, its initial management remains controversial. There are some published guidelines for the management of spontaneous pneumothorax. However, they differ in some respects, particularly in initial management. In published trials, the objective of treatment has not been clarified and it is not possible to compare the treatment strategies between different trials because of inappropriate evaluations of the air leak. Therefore, there is a need to outline the optimal management strategy for pneumothorax. In this report, we systematically review published randomized controlled trials of the different treatments of primary spontaneous pneumothorax, point out controversial issues and finally propose a three-step strategy for the management of pneumothorax. There are three important characteristics of pneumothorax: potentially lethal respiratory dysfunction; air leak, which is the obvious cause of the disease; frequent recurrence. These three characteristics correspond to the three steps. The central idea of the strategy is that the lung should not be expanded rapidly, unless absolutely necessary. The primary objective of both simple aspiration and chest drainage should be the recovery of acute respiratory dysfunction or the avoidance of respiratory dysfunction and subsequent complications. We believe that this management strategy is simple and clinically relevant and not dependent on the classification of pneumothorax. PMID:23117233
Time and frequency domain analyses of the Hualien Large-Scale Seismic Test
International Nuclear Information System (INIS)
Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup
2015-01-01
Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain
Modeling Stepped Leaders Using a Time Dependent Multi-dipole Model and High-speed Video Data
Karunarathne, S.; Marshall, T.; Stolzenburg, M.; Warner, T. A.; Orville, R. E.
2012-12-01
In the summer of 2011, we collected lightning data with 10 stations of electric field change meters (bandwidth 0.16 Hz - 2.6 MHz) on and around NASA/Kennedy Space Center (KSC), covering a nearly 70 km × 100 km area. We also had a high-speed video (HSV) camera recording 50,000 images per second collocated with one of the electric field change meters. In this presentation we describe our use of these data to model the electric field change caused by stepped leaders. Stepped leaders of a cloud-to-ground lightning flash typically create the initial path for the first return stroke (RS). Most of the time, stepped leaders have multiple complex branches, and one of these branches will create the ground connection for the RS to start. HSV data acquired with a short focal length lens at ranges of 5-25 km from the flash are useful for obtaining the 2-D locations of these multiple branches developing at the same time. Using HSV data along with data from the KSC Lightning Detection and Ranging (LDAR2) system and the Cloud to Ground Lightning Surveillance System (CGLSS), the 3-D path of a leader may be estimated. Once the path of a stepped leader is obtained, the time dependent multi-dipole model [Lu, Winn, and Sonnenfeld, JGR 2011] can be used to match the electric field change at various sensor locations. Based on this model, we will present the time-dependent charge distribution along a leader channel and the total charge transfer during the stepped leader phase.
Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar
2002-01-01
A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column and the bacteriocins were detected as major optical density peaks upon elution with propanol. More than 80% of the activity that was initially in the culture supernatant was recovered in both purification steps, and the final bacteriocin preparation was more than 90% pure as judged by analytical reverse-phase chromatography and capillary electrophoresis. PMID:11823243
International Nuclear Information System (INIS)
Kim, Tae Gyoum; Jang, Jin-Tak; Ryu, Hyukhyun; Lee, Won-Jae
2013-01-01
Highlights: • We grew vertical ZnO nanorods on an ITO substrate using a two-step continuous potential process. • The nucleation for ZnO nanorod growth was changed by the first-step potential and duration. • The vertical ZnO nanorods were well grown when the first-step potential was −1.2 V for 10 s. -- Abstract: In this study, we analyzed the growth of ZnO nanorods on an ITO (indium doped tin oxide) substrate by electrochemical deposition using a two-step, continuous potential process. We examined the effect of changing the first-step potential as well as the first-step duration on the morphological, structural and optical properties of ZnO nanorods, measured using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence (PL), respectively. As a result, vertical ZnO nanorods were grown on the ITO substrate without the need for a template when the first-step potential was set to −1.2 V for a duration of 10 s, and the second-step potential was set to −0.7 V for a duration of 1190 s. The ZnO nanorods on this sample showed the highest XRD (0 0 2)/(1 0 0) peak intensity ratio and the highest PL near band edge emission to deep level emission peak intensity ratio (NBE/DLE). In this study, the nucleation for vertical ZnO nanorod growth on an ITO substrate was found to be affected by changes in the first-step potential and first-step duration.
A Study of Low-Reynolds Number Effects in Backward-Facing Step Flow Using Large Eddy Simulations
DEFF Research Database (Denmark)
Davidson, Lars; Nielsen, Peter V.
The flow in ventilated rooms is often not fully turbulent, but in some regions the flow can be laminar. Problems have been encountered when simulating this type of flow using RANS (Reynolds-Averaged Navier-Stokes) methods. Restivo carried out experiments on the flow after a backward-facing step...
Stability control for approximate implicit time-stepping schemes with minimal residual iterations
Botchev, M.A.; Sleijpen, G.L.G.; Vorst, H.A. van der
1997-01-01
Implicit schemes for the integration of ODEs are popular when stability is more of a concern than accuracy, for instance for the computation of a steady-state solution. However, in particular for very large systems, the solution of the involved linear systems may be very expensive. In this
Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection
Chen, Zhaokang; Shi, Bertram E.
2017-01-01
In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...
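The idea of probability-dependent dwell times can be illustrated with a toy mapping: likely targets get shorter dwells, while unlikely ones keep a long, "Midas Touch"-resistant dwell. The linear interpolation and the t_min/t_max bounds below are hypothetical choices, not the algorithm proposed in the paper.

```python
def dwell_times(probs, t_min=0.15, t_max=0.8):
    """Map each object's inferred selection probability to a dwell time (s).
    High probability -> dwell close to t_min; low probability -> close to t_max,
    so unintended fixations are unlikely to trigger a selection."""
    return {obj: t_max - (t_max - t_min) * p for obj, p in probs.items()}

# illustrative inferred probabilities for three on-screen objects
probs = {"back": 0.6, "link_a": 0.3, "link_b": 0.1}
dt = dwell_times(probs)
```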
Real time simulation of large systems on mini-computer
International Nuclear Information System (INIS)
Nakhle, Michel; Roux, Pierre.
1979-01-01
Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The step size of the usual integration methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that will take implicit equations, and an integration method whose variable step size is not limited by the time constants of the model. This, together with extensive optimization of the time and memory resources of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.
Ni, Bing-Jie; Pan, Yuting; van den Akker, Ben; Ye, Liu; Yuan, Zhiguo
2015-08-04
Nitrous oxide (N2O) emission data collected from wastewater treatment plants (WWTPs) show huge variations between plants and within one plant (both spatially and temporally). Such variations and the relative contributions of various N2O production pathways are not fully understood. This study applied a previously established N2O model incorporating two currently known N2O production pathways by ammonia-oxidizing bacteria (AOB) (namely the AOB denitrification and the hydroxylamine pathways) and the N2O production pathway by heterotrophic denitrifiers to describe and provide insights into the large spatial variations of N2O fluxes in a step-feed full-scale activated sludge plant. The model was calibrated and validated by comparing simulation results with 40 days of N2O emission monitoring data as well as other water quality parameters from the plant. The model demonstrated that the relatively high biomass specific nitrogen loading rate in the Second Step of the reactor was responsible for the much higher N2O fluxes from this section. The results further revealed that the AOB denitrification pathway decreased and the NH2OH oxidation pathway increased along the path of both Steps due to the increasing dissolved oxygen concentration. The overall N2O emission from this step-feed WWTP would be largely mitigated if 30% of the returned sludge were returned to the Second Step to reduce its biomass nitrogen loading rate.
Radtke, H.; Burchard, H.
2015-01-01
In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. These systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, but its accuracy is then restricted to second order. The performance of the new scheme on two test case problems is shown.
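The modified Patankar idea referenced above (Burchard et al., 2003) weights each production and destruction term by c_i^{n+1}/c_i^n, turning an Euler step into a small linear solve that is unconditionally positive and conservative. The two-box exchange system and rate constants below are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

def modified_patankar_euler(c, dt, p, d):
    """One modified Patankar-Euler step (after Burchard et al., 2003) for a
    production-destruction system dc_i/dt = sum_j p_ij(c) - sum_j d_ij(c),
    with d_ij = p_ji. Weighting each term by c^{n+1}/c^n yields a linear
    system whose solution is positive and mass-conserving for any dt."""
    n = len(c)
    A = np.eye(n)
    P, D = p(c), d(c)
    for i in range(n):
        A[i, i] += dt * D[i].sum() / c[i]      # implicit destruction weights
        for j in range(n):
            A[i, j] -= dt * P[i, j] / c[j]     # implicit production weights
    return np.linalg.solve(A, c)

# two-box exchange: box 1 -> box 2 at rate a, box 2 -> box 1 at rate b
a, b = 1.0, 0.5
p = lambda c: np.array([[0.0, b * c[1]], [a * c[0], 0.0]])  # p[i,j]: production of i from j
d = lambda c: np.array([[0.0, a * c[0]], [b * c[1], 0.0]])  # d[i,j]: loss of i to j
c = np.array([1.0, 1e-6])
for _ in range(5):
    c = modified_patankar_euler(c, dt=10.0, p=p, d=d)  # huge step, still positive
```

Even with a time step far beyond any explicit stability limit, the state stays positive, total mass is conserved, and the iteration relaxes toward the b/a equilibrium partition.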
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. The dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model proves to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
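A toy simulation in the spirit of the memory-CTRW described above, reduced to the one-step special case: waiting times are i.i.d. exponential, while each jump reverses the previous one with high probability, mimicking bid-ask bounce. The reversal probability and rate below are illustrative, not the paper's fitted values.

```python
import random

def simulate_ctrw(n_jumps, p_reverse=0.8, rate=1.0, seed=0):
    """CTRW with one-step memory: i.i.d. exponential waiting times, and each
    unit price jump reverses the previous one with probability p_reverse."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    jumps, times = [], []
    last = rng.choice([-1, 1])
    for _ in range(n_jumps):
        t += rng.expovariate(rate)       # i.i.d. waiting time between jumps
        step = -last if rng.random() < p_reverse else last
        x += step
        jumps.append(step)
        times.append(t)
        last = step
    return times, jumps, x

times, jumps, x = simulate_ctrw(20000)
# successive jumps are negatively correlated: E[s_n s_{n+1}] = 1 - 2*p_reverse
corr = sum(a * b for a, b in zip(jumps, jumps[1:])) / (len(jumps) - 1)
```

The negative jump-jump correlation is what produces the negative short-lag velocity autocorrelation that the model is compared against empirically.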
Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K
2017-01-01
Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...
Energy Technology Data Exchange (ETDEWEB)
Bejeh Mir, Arash Poorsattar [Dentistry Student Research Committee (DSRC), Dental Materials Research Center, Dentistry School, Babol University of Medical Sciences, Babol (Iran, Islamic Republic of); Bejeh Mir, Morvarid Poorsattar [Private Practice of Orthodontics, Montreal, Quebec (Canada)
2012-09-15
ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins with various tube-target distances and exposure times. Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) operating at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray value). Furthermore, a linear regression model of aluminum thickness versus absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). The radiopacities of the composite resins differed significantly with the various setups (p<0.001) and between the materials (p<0.001). The best prediction model was obtained for the 30 cm, 0.2 seconds setup (R²=0.999). Data from the reduced modified stepwedge were remarkably comparable with those from the 12-step stepwedge. Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
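The gray-value-to-absorbency conversion quoted in the abstract can be turned into an aluminum-equivalent calculation in a few lines. A base-10 logarithm is assumed (as in optical density), and the stepwedge gray values below are synthetic, generated from an assumed attenuation slope rather than measured:

```python
import numpy as np

def absorbency(gray):
    """A = -log10(1 - G/255), as defined in the study."""
    return -np.log10(1.0 - gray / 255.0)

al_thickness = np.arange(1, 13)                        # 12-step wedge, 1 mm steps
gray = 255.0 * (1.0 - 10.0 ** (-0.08 * al_thickness))  # synthetic: A = 0.08/mm * t
A = absorbency(gray)

# linear regression of absorbency on aluminum thickness: A = slope * t + intercept
slope, intercept = np.polyfit(al_thickness, A, 1)

# aluminum-equivalent thickness of a sample whose measured gray value is 100
t_eq = (absorbency(100.0) - intercept) / slope
```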
Diamond detector time resolution for large angle tracks
Energy Technology Data Exchange (ETDEWEB)
Chiodini, G., E-mail: chiodini@le.infn.it [INFN - Sezione di Lecce (Italy); Fiore, G.; Perrino, R. [INFN - Sezione di Lecce (Italy); Pinto, C.; Spagnolo, S. [INFN - Sezione di Lecce (Italy); Dip. di Matematica e Fisica “Ennio De Giorgi”, Uni. del Salento (Italy)
2015-10-01
The applications which have stimulated the greatest interest in diamond sensors are related to detectors close to particle beams, and therefore in environments with high radiation levels (beam monitors, luminosity measurement, detection of primary and secondary interaction vertices). Our aim is to extend the studies performed so far by developing the technical advances needed to prove the competitiveness of this technology in terms of time resolution with respect to more conventional technologies, which do not guarantee the required tolerance to high radiation doses. In pursuit of these goals, measurements of diamond detector time resolution with tracks incident at different angles are discussed. In particular, preliminary testbeam results obtained with 5 GeV electrons and polycrystalline diamond strip detectors are shown.
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
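The core point above, that infiltration-excess runoff depends nonlinearly on within-day intensity, so a daily mean can miss it entirely, can be shown with a toy calculation. The exponential intensity distribution and all parameter values are assumptions for illustration:

```python
import numpy as np

f = 5.0        # infiltration capacity, mm/h
mean_i = 2.0   # daily mean rainfall intensity, mm/h
hours = 24.0

# daily-mean model: mean intensity never exceeds f -> predicts zero runoff
runoff_daily = max(mean_i - f, 0.0) * hours

# distribution-resolved model: runoff = E[max(i - f, 0)] over a within-day
# intensity distribution with the same mean (exponential, for illustration)
rng = np.random.default_rng(1)
i_subdaily = rng.exponential(mean_i, size=100_000)
runoff_cdf = np.maximum(i_subdaily - f, 0.0).mean() * hours
```

Because max(i − f, 0) is convex, resolving the intensity distribution always yields at least as much runoff as using the daily mean; here the daily-mean model predicts none at all while the distribution-based model predicts several millimetres.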
Seven Steps to Heaven: Time and Tide in 21st Century Contemporary Music Higher Education
Mitchell, Annie K.
2018-01-01
Throughout the time of my teaching career, the tide has exposed changes in the nature of music, students and music education. This paper discusses teaching and learning in contemporary music at seven critical stages of 21st century music education: i) diverse types of undergraduate learners; ii) teaching traditional classical repertoire and skills…
Bochev, Mikhail A.; Oseledets, I.V.; Tyrtyshnikov, E.E.
2013-01-01
The aim of this paper is two-fold. First, we propose an efficient implementation of the continuous time waveform relaxation method based on block Krylov subspaces. Second, we compare this new implementation against Krylov subspace methods combined with the shift and invert technique.
Minisini, S.; Zhebel, E.; Kononov, A.; Mulder, W.A.
2013-01-01
Modeling and imaging techniques for geophysics are extremely demanding in terms of computational resources. Seismic data attempt to resolve smaller scales and deeper targets in increasingly more complex geologic settings. Finite elements enable accurate simulation of time-dependent wave propagation
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional
Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves
Rošt'áková, Zuzana; Rosipal, Roman
2018-02-01
Sleep can be characterised as a dynamic process that has a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take into account its dynamic structure. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. Original and time-aligned sleep probabilistic curves were transformed into a finite dimensional vector space, and their ability to predict subjects' age or daily measures was evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow wave sleep.
The impact of weight classification on safety: timing steps to adapt to external constraints
Gill, S.V.
2015-01-01
Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults' ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons across the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658
Off-line real-time FTIR analysis of a process step in imipenem production
Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.
1992-08-01
We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real time. The reaction is moisture-sensitive, and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.
Effect of moisture and drying time on the bond strength of the one-step self-etching adhesive system
Directory of Open Access Journals (Sweden)
Yoon Lee
2012-08-01
Objectives: To investigate the effect of dentin moisture degree and air-drying time on the dentin bond strength of two different one-step self-etching adhesive systems. Materials and Methods: Twenty-four human third molars were used for microtensile bond strength testing of G-Bond and Clearfil S3 Bond. The dentin surface was either blot-dried or air-dried before applying these adhesive agents. After application of the adhesive agent, three different air-drying times were evaluated: 1, 5, and 10 sec. Composite resin was built up to 4 mm thickness and light-cured for 40 sec in 2 separate layers. The tooth was then sectioned and trimmed to measure the microtensile bond strength using a universal testing machine. The measured bond strengths were analyzed with three-way ANOVA, and regression analysis was done (p = 0.05). Results: All three factors (material, dentin wetness and air-drying time) showed a significant effect on the microtensile bond strength. Clearfil S3 Bond, a dry dentin surface and a 10 sec air-drying time showed higher bond strength. Conclusions: Within the limitations of this experiment, air-drying time after application of the one-step self-etching adhesive agent was the most significant factor affecting the bond strength, followed by the material difference and dentin moisture before applying the adhesive agent.
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
International Nuclear Information System (INIS)
Finn, John M.
2015-01-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
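The implicit midpoint (IM) scheme discussed above can be sketched with a simple fixed-point solver. The test field (a rigid rotation, whose circles play the role of invariant tori) and the solver tolerances are illustrative choices, not the paper's setup:

```python
import numpy as np

def implicit_midpoint(x, dt, field, iters=50, tol=1e-13):
    """One implicit-midpoint step x' = x + dt * B((x + x') / 2), solved by
    fixed-point iteration starting from an explicit Euler predictor."""
    x_new = x + dt * field(x)
    for _ in range(iters):
        x_next = x + dt * field(0.5 * (x + x_new))
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

# divergence-free planar field: rigid rotation B = (-y, x);
# circles r = const are the invariant curves the scheme should preserve
field = lambda x: np.array([-x[1], x[0]])
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint(x, dt=0.1, field=field)
r = np.hypot(x[0], x[1])
```

For this linear field the IM update is an orthogonal (Cayley-transform) map, so the radius is conserved to solver tolerance even over many steps.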
First steps towards real-time radiography at the NECTAR facility
Bücherl, T.; Wagner, F. M.; v. Gostomski, Ch. Lierse
2009-06-01
The beam tube SR10 at Forschungsneutronenquelle Heinz Maier-Leibnitz (FRM II) provides an intense beam of fission neutrons for medical application (MEDAPP) and for radiography and tomography of technical and other objects (NECTAR). The high neutron flux of up to 9.8 × 10^7 cm^-2 s^-1 (depending on filters and collimation), with a mean energy of about 1.9 MeV at the sample position of the NECTAR facility, prompted an experimental feasibility study to investigate the potential for real-time (RT) radiography.
A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions
Directory of Open Access Journals (Sweden)
Abdul Rahman Hafiz
2011-01-01
Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated vision system that can enhance a robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could see clearly in real time and build a mental map that helped it to be aware of frontal users and to develop positive interactions with them.
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
Energy Technology Data Exchange (ETDEWEB)
Baker, Robin Ivey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Balestra, Paolo [Univ. of Rome (Italy); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-05-01
A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings
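As an illustrative aside, the adaptive time-step idea described above (larger steps when the solution converges quickly, smaller steps otherwise) can be sketched with a generic step-doubling controller. This is a minimal sketch under stated assumptions, not the PHISICS/RELAP5-3D implementation: `advance`, the tolerance `tol`, and the doubling/halving thresholds are all illustrative.

```python
def adaptive_step(advance, y, t, t_end, dt, tol, dt_min=1e-9, dt_max=1.0):
    """Step-doubling adaptive time stepping: compare one full step against two
    half steps; grow the step when the state changes slowly, shrink it when
    the local error estimate exceeds `tol`."""
    while t < t_end:
        dt = min(dt, t_end - t)                # do not overshoot the end time
        full = advance(y, t, dt)
        half = advance(advance(y, t, dt / 2.0), t + dt / 2.0, dt / 2.0)
        err = abs(full - half) / (abs(half) + 1e-12)  # relative error estimate
        if err <= tol or dt <= dt_min:
            y, t = half, t + dt                # accept the more accurate result
            if err < 0.1 * tol:
                dt = min(2.0 * dt, dt_max)     # converging fast: enlarge step
        else:
            dt = max(dt / 2.0, dt_min)         # reject and retry with a smaller step
    return y

# Usage: dy/dt = -y with an explicit Euler stepper; exact answer is exp(-1) ≈ 0.368.
y_end = adaptive_step(lambda y, t, dt: y * (1.0 - dt), 1.0, 0.0, 1.0, 0.01, 1e-4)
```

The actual code uses flux- or power-convergence criteria of the coupled neutronics solution rather than a scalar step-doubling estimate, but the accept/grow/reject logic is of this general shape.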
Time-step selection considerations in the analysis of reactor transients with DIF3D-K
International Nuclear Information System (INIS)
Taiwo, T.A.; Khalil, H.S.; Cahalan, J.E.; Morris, E.E.
1993-01-01
The DIF3D-K code solves the three-dimensional, time-dependent multigroup neutron diffusion equations by using a nodal approach for spatial discretization and either the theta method or one of three space-time factorization approaches for temporal integration of the nodal equations. The three space-time factorization options (namely, improved quasistatic, adiabatic, and conventional point kinetics) were implemented because of their potential efficiency advantage for the analysis of transients in which the flux shape changes more slowly than its amplitude. In this paper, we describe the implementation of DIF3D-K as the neutronics module within the SAS-HWR accident analysis code. We also describe the neutronics-related time-step selection algorithms and their influence on the accuracy and efficiency of the various solution options
Altunsoy, Mustafa; Botsali, Murat Selim; Tosun, Gonca; Yasar, Ahmet
2015-10-16
The aim of this study was to evaluate the effect of increased exposure times on the amount of residual Bis-GMA, TEGDMA, HEMA and UDMA released from single-step self-etch adhesive systems. Two adhesive systems were used. The adhesives were applied to bovine dentin surface according to the manufacturer's instructions and were polymerized using an LED curing unit for 10, 20 and 40 seconds (n = 5). After polymerization, the specimens were stored in 75% ethanol-water solution (6 mL). Residual monomers (Bis-GMA, TEGDMA, UDMA and HEMA) that were eluted from the adhesives (after 10 minutes, 1 hour, 1 day, 7 days and 30 days) were analyzed by high-performance liquid chromatography (HPLC). The data were analyzed using 1-way analysis of variance and Tukey HSD tests. Among the time periods, the highest amount of released residual monomers from adhesives was observed in the 10th minute. There were statistically significant differences regarding released Bis-GMA, UDMA, HEMA and TEGDMA between the adhesive systems (p<0.05). There were no significant differences among the 10, 20 and 40 second polymerization times according to their effect on residual monomer release from adhesives (p>0.05). Increasing the polymerization time did not have an effect on residual monomer release from single-step self-etch adhesives.
In the time of significant generational diversity - surgical leadership must step up!
Money, Samuel R; O'Donnell, Mark E; Gray, Richard J
2014-02-01
The diverse attitudes and motivations of surgeons and surgical trainees within different age groups present an important challenge for surgical leaders and educators. These challenges to surgical leadership are not unique, and other industries have likewise needed to grapple with how best to manage these various age groups. The authors will herein explore management and leadership for surgeons in a time of age diversity, define generational variations within "Baby-Boomer", "Generation X" and "Generation Y" populations, and identify work ethos concepts amongst these three groups. The surgical community must understand and embrace these concepts in order to continue to attract a stellar pool of applicants from medical school. By not accepting the changing attitudes and motivations of young trainees and medical students, we may disenfranchise a high percentage of potential future surgeons. Surgical training programs will fill, but will they contain the highest quality trainees? Copyright © 2013 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Kurz, Ilan; Gimmon, Yoav; Shapiro, Amir; Debi, Ronen; Snir, Yoram; Melzer, Itshak
2016-03-04
Falls are common among the elderly, and most occur while slipping or tripping during walking. We aimed to explore whether a training program that incorporates unexpected loss of balance during walking is able to improve risk factors for falls. In a double-blind randomized controlled trial, 53 community-dwelling older adults (age 80.1±5.6 years) were recruited and randomly allocated to an intervention group (n = 27) or a control group (n = 26). The intervention group received 24 training sessions over 3 months that included unexpected balance perturbation exercises during treadmill walking. The control group performed treadmill walking with no perturbations. The primary outcome measures were the voluntary step execution times, traditional postural sway parameters, and stabilogram-diffusion analysis. The secondary outcome measures were the Falls Efficacy Scale (FES), self-reported late-life function (LLFDI), and the Performance-Oriented Mobility Assessment (POMA). Compared to control, participation in the intervention program led to faster voluntary step execution times under single-task (p = 0.002; effect size [ES] = 0.75) and dual-task (p = 0.003; ES = 0.89) conditions; intervention group subjects also showed improvement in short-term effective diffusion coefficients in the mediolateral direction of the stabilogram-diffusion analysis under eyes-closed conditions (p = 0.012, ES = 0.92). Compared to control, there were no significant changes in FES, LLFDI, or POMA. An intervention program that includes unexpected loss of balance during walking can improve voluntary stepping times and balance control, both previously reported as risk factors for falls. This, however, did not transfer to a change in self-reported function or FES. ClinicalTrials.gov NCT01439451.
Directory of Open Access Journals (Sweden)
Roeland de Kok
2012-08-01
Contrast plays an important role in the visual interpretation of imagery. To mimic visual interpretation and use contrast in a Geographic Object-Based Image Analysis (GEOBIA) environment, it is useful to consider an analysis for single-pixel objects. This should be done before applying homogeneity criteria in the aggregation of pixels for the construction of meaningful image objects. The habit or “best practice” of starting GEOBIA with pixel aggregation into homogeneous objects should come with the awareness that feature attributes for single pixels are at risk of becoming less accessible for further analysis. Single-pixel contrast with image convolution on close neighborhoods is a standard technique, also applied in edge detection. This study elaborates on the analysis of close as well as much larger neighborhoods inside the GEOBIA domain. The applied calculations are limited to the first segmentation step for single-pixel objects in order to produce additional feature attributes for objects of interest to be generated in further aggregation processes. The equation presented functions at a level that is considered an intermediary product in the sequential processing of imagery. The procedure requires intensive processor and memory capacity. The resulting feature attributes highlight not only contrasting pixels (edges) but also contrasting areas of local pixel groups. The suggested approach can be extended and becomes useful in classifying artificial areas at national scales using high-resolution satellite mosaics.
Pochop, Jaroslav; Kačániová, Miroslava; Hleba, Lukáš; Lopasovský, L'ubomír; Bobková, Alica; Zeleňáková, Lucia; Stričík, Michal
2012-01-01
The aim of this study was to follow contamination of ready-to-eat food with Listeria monocytogenes by using the StepOne real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the MicroSEQ® Listeria monocytogenes Detection Kit for the real-time PCR performance. Among 30 samples of ready-to-eat milk and meat products analyzed without incubation, we detected Listeria monocytogenes in five samples (swabs). The internal positive control (IPC) was positive in all samples. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in ready-to-eat food without incubation.
DEFF Research Database (Denmark)
Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad
2013-01-01
In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in default assembly i...... library in the edcSimpleFoam solver which was introduced during the 6th OpenFOAM workshop is modified and coupled with the current solver. One of the main amendments made is the integration of soot radiation submodel since this is significant in rich flames where soot particles are formed. The new solver...
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect, and combined models are developed with lumped and distributed input data. Model performance was evaluated using various performance criteria. The results show that LGP models are superior to ANN and ANFIS models, especially in predicting peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, owing to reduced noise in the data as well as to the training approach, appropriate selection of network architecture, required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data models is attributed to large variations and a smaller number of observed values.
Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao; Street, Robert A.; Lu, Jeng Ping
2014-03-01
The thin-film semiconductor processing methods that enabled creation of inexpensive liquid crystal displays based on amorphous silicon transistors for cell phones and televisions, as well as desktop, laptop and mobile computers, also facilitated the development of devices that have become ubiquitous in medical x-ray imaging environments. These devices, called active matrix flat-panel imagers (AMFPIs), measure the integrated signal generated by incident X rays and offer detection areas as large as ~43×43 cm^2. In recent years, there has been growing interest in medical x-ray imagers that record information from X ray photons on an individual basis. However, such photon counting devices have generally been based on crystalline silicon, a material not inherently suited to the cost-effective manufacture of monolithic devices of a size comparable to that of AMFPIs. Motivated by these considerations, we have developed an initial set of small area prototype arrays using thin-film processing methods and polycrystalline silicon transistors. These prototypes were developed in the spirit of exploring the possibility of creating large area arrays offering single photon counting capabilities and, to our knowledge, are the first photon counting arrays fabricated using thin film techniques. In this paper, the architecture of the prototype pixels is presented and considerations that influenced the design of the pixel circuits, including amplifier noise, TFT performance variations, and minimum feature size, are discussed.
Energy Technology Data Exchange (ETDEWEB)
Mather, Barry
2017-08-24
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R
2017-03-01
Purpose: To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method: People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance, and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results: The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). The CSRT test thus showed predictive validity for falls in people with MS. Implications for rehabilitation: Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and the effects of interventions.
Directory of Open Access Journals (Sweden)
Hamid Reza Fooladmand
2017-06-01
Data from 2006 to 2008 were used for calibrating fourteen models for estimating solar radiation at seasonal and annual time steps, and the measured data of 2009 and 2010 were used for evaluating the results. The equations used in this study were divided into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; and (3) equations based on sunshine hours and air temperature together. Statistical comparison was then performed to select the best equation for estimating solar radiation at seasonal and annual time steps. For this purpose, in the validation stage a combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models at the mentioned time steps. Results and Discussion: The mean MSD values of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08, and 16.19 for spring through winter, respectively, and 15.40 at the annual time step. The results therefore showed that the equations had high accuracy for autumn but low accuracy for the other seasons, so using the equations at the annual time step was more appropriate than at the seasonal time steps. Also, the mean MSD values of the equations based only on sunshine hours, the equations based only on air temperature, and the equations based on the combination of sunshine hours and air temperature were 14.82, 17.40, and 14.88, respectively. The results thus indicated that models based only on air temperature were the worst for estimating solar radiation in the Shiraz region, and that using sunshine hours for estimating solar radiation is necessary. Conclusions: In this study, solar radiation was estimated at seasonal and annual time steps in the Shiraz region
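As an illustrative aside, the mean square deviation (MSD) used above to rank the radiation models is simply the mean of the squared differences between measured and estimated values. A minimal sketch (the data values are invented for illustration, not from the study):

```python
def mean_square_deviation(measured, estimated):
    """Mean square deviation (MSD) between measured and estimated values:
    MSD = (1/n) * sum((m_i - e_i)^2). Lower is better."""
    if len(measured) != len(estimated):
        raise ValueError("series must have equal length")
    return sum((m - e) ** 2 for m, e in zip(measured, estimated)) / len(measured)

# Usage: compare two hypothetical models against the same measurements.
measured = [20.1, 22.4, 18.9, 25.0]
model_a = [19.8, 22.0, 19.5, 24.2]
model_b = [18.0, 25.0, 16.5, 27.0]
msd_a = mean_square_deviation(measured, model_a)
msd_b = mean_square_deviation(measured, model_b)
# The model with the smaller MSD is preferred.
```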
Effect of different air-drying time on the microleakage of single-step self-etch adhesives
Directory of Open Access Journals (Sweden)
Horieh Moosavi
2013-05-01
Objectives: This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods: Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups according to the air-drying time: without application of an air stream, following the manufacturer's instruction, or for 10 sec more than the manufacturer's instruction. After completion of the restorations, specimens were thermocycled and then connected to a fluid filtration system to evaluate microleakage. The data were statistically analyzed using two-way ANOVA and Tukey tests (α = 0.05). Results: The microleakage of all adhesives decreased when the air-drying time increased from 0 sec to the manufacturer's instruction (p < 0.001). The microleakage of BF reached its lowest values after increasing the drying time to 10 sec more than the manufacturer's instruction (p < 0.001). Microleakage of OBAO and CSB was significantly lower compared to BF at all three drying times (p < 0.001). Conclusions: Increasing the air-drying time of the adhesive layer in one-step self-etch adhesives reduced microleakage, but the amount of this reduction may depend on the adhesive components of the self-etch adhesives.
Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection
Directory of Open Access Journals (Sweden)
T. La-inchua
2017-01-01
We investigate the finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.
Directory of Open Access Journals (Sweden)
R. Wu
2016-05-01
Two-dimensional topological insulators with a large bulk band gap are promising for experimental studies of the quantum spin Hall effect and for spintronic device applications. Despite considerable theoretical efforts in predicting large-gap two-dimensional topological insulator candidates, none have been experimentally demonstrated to have a full gap, which is crucial for the quantum spin Hall effect. Here, by combining scanning tunneling microscopy/spectroscopy and angle-resolved photoemission spectroscopy, we reveal that the ZrTe_{5} crystal hosts a large full gap of ∼100 meV on the surface and a nearly constant density of states within the entire gap at the monolayer step edge. These features are well reproduced by our first-principles calculations, which point to the topologically nontrivial nature of the edge states.
Lewis, L K; Rowlands, A V; Gardiner, P A; Standage, M; English, C; Olds, T
2016-03-01
This study aimed to evaluate the preliminary effectiveness and feasibility of a theory-informed program to reduce sitting time in older adults. Pre-experimental (pre-post) study. Thirty non-working adult (≥ 60 years) participants attended a one hour face-to-face intervention session and were guided through: a review of their sitting time; normative feedback on sitting time; and setting goals to reduce total sitting time and bouts of prolonged sitting. Participants chose six goals and integrated one per week incrementally for six weeks. Participants received weekly phone calls. Sitting time and bouts of prolonged sitting (≥ 30 min) were measured objectively for seven days (activPAL3c inclinometer) pre- and post-intervention. During these periods, a 24-h time recall instrument was administered by computer-assisted telephone interview. Participants completed a post-intervention project evaluation questionnaire. Paired t tests with sequential Bonferroni corrections and Cohen's d effect sizes were calculated for all outcomes. Twenty-seven participants completed the assessments (71.7 ± 6.5 years). Post-intervention, objectively-measured total sitting time was significantly reduced by 51.5 min per day (p=0.006; d=-0.58) and number of bouts of prolonged sitting by 0.8 per day (p=0.002; d=-0.70). Objectively-measured standing increased by 39 min per day (p=0.006; d=0.58). Participants self-reported spending 96 min less per day sitting (p<0.001; d=-0.77) and 32 min less per day watching television (p=0.005; d=-0.59). Participants were highly satisfied with the program. The 'Small Steps' program is a feasible and promising avenue for behavioral modification to reduce sitting time in older adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Use of a large time-compensated scintillation detector in neutron time-of-flight measurements
International Nuclear Information System (INIS)
Goodman, C.D.
1979-01-01
A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the points of scintillation within the scintillator. Extrapolated-zero timing is employed rather than the usual constant-fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time. 3 claims
Biomechanical influences on balance recovery by stepping.
Hsiao, E T; Robinovitch, S N
1999-10-01
Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
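As an illustrative aside, the central quantity in stepping-based balance recovery is the time window the faller has before the body's lean exceeds a recoverable angle. The paper's pendulum-spring model is more detailed, but the window itself can be sketched with a bare inverted-pendulum simulation; all parameter values below are illustrative assumptions, not the paper's.

```python
import math

def time_to_angle(theta0, omega0, theta_max, g=9.81, length=1.0, dt=1e-4):
    """Integrate an inverted-pendulum model of the body falling after a
    perturbation and return the time until the lean angle reaches `theta_max`,
    i.e., the window within which a recovery step must land.
    Dynamics: theta'' = (g / length) * sin(theta), integrated with
    semi-implicit Euler from initial lean theta0 and angular velocity omega0."""
    theta, omega, t = theta0, omega0, 0.0
    while theta < theta_max:
        omega += (g / length) * math.sin(theta) * dt  # angular acceleration
        theta += omega * dt
        t += dt
    return t

# Usage: a stronger perturbation (larger initial angular velocity) shortens
# the available stepping window, so steps must be quicker or longer.
weak = time_to_angle(theta0=0.05, omega0=0.1, theta_max=0.5)
strong = time_to_angle(theta0=0.05, omega0=0.5, theta_max=0.5)
```

This matches the qualitative finding above: as perturbation strength grows, step execution time must shrink (or step length grow) for recovery to remain feasible.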
Jia, Hao-Ran; Wang, Hong-Yin; Yu, Zhi-Wu; Chen, Zhan; Wu, Fu-Gen
2016-03-16
Long-time stable plasma membrane imaging is difficult due to the fast cellular internalization of fluorescent dyes and the quick detachment of the dyes from the membrane. In this study, we developed a two-step synergistic cell surface modification and labeling strategy to realize long-time plasma membrane imaging. Initially, a multisite plasma membrane anchoring reagent, glycol chitosan-10% PEG2000 cholesterol-10% biotin (abbreviated as "GC-Chol-Biotin"), was incubated with cells to modify the plasma membranes with biotin groups with the assistance of the membrane anchoring ability of cholesterol moieties. Fluorescein isothiocyanate (FITC)-conjugated avidin was then introduced to achieve the fluorescence-labeled plasma membranes based on the supramolecular recognition between biotin and avidin. This strategy achieved stable plasma membrane imaging for up to 8 h without substantial internalization of the dyes, and avoided the quick fluorescence loss caused by the detachment of dyes from plasma membranes. We have also demonstrated that the imaging performance of our staining strategy far surpassed that of current commercial plasma membrane imaging reagents such as DiD and CellMask. Furthermore, the photodynamic damage of plasma membranes caused by a photosensitizer, Chlorin e6 (Ce6), was tracked in real time for 5 h during continuous laser irradiation. Plasma membrane behaviors including cell shrinkage, membrane blebbing, and plasma membrane vesiculation could be dynamically recorded. Therefore, the imaging strategy developed in this work may provide a novel platform to investigate plasma membrane behaviors over a relatively long time period.
International Nuclear Information System (INIS)
Csom, Gyula; Feher, Sandor; Szieberthj, Mate
2002-01-01
Nowadays the molten salt reactor (MSR) concept seems to revive as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems the fuel and the material to be transmuted circulate dissolved in some molten salt. The main advantage of this reactor type is the possibility of continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, that yields higher transmutation efficiencies than the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and duration. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are coupled only neutronically and thermally. This new concept makes possible the utilization of the spatial dependence of the spectrum, as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations aimed at finding the optimal implementation of this new concept and at emphasizing its other advantageous features are ongoing. (authors)
Sleep, John; Irving, Malcolm; Burton, Kevin
2005-03-15
The time course of isometric force development following photolytic release of ATP in the presence of Ca²⁺ was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 °C, the lag was 5.6 ± 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 ± 4 s⁻¹. Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 ± 3 s⁻¹, very similar to that following ATP release. When fibres were activated by the addition of Ca²⁺ in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 ± 4 s⁻¹ (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 °C than at 20 °C. The rate constant of a single-exponential fit to the force rise was 4.3 ± 0.4 s⁻¹ (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 ± 0.2 s⁻¹. We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two
Computational challenges of large-scale, long-time, first-principles molecular dynamics
International Nuclear Information System (INIS)
Kent, P R C
2008-01-01
Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations.
A time-focusing Fourier chopper time-of-flight diffractometer for large scattering angles
International Nuclear Information System (INIS)
Heinonen, R.; Hiismaeki, P.; Piirto, A.; Poeyry, H.; Tiitta, A.
1975-01-01
A high-resolution time-of-flight diffractometer utilizing time focusing principles in conjunction with a Fourier chopper is under construction at Otaniemi. The design is an improved version of a test facility which has been used for single-crystal and powder diffraction studies with promising results. A polychromatic neutron beam from a radial beam tube of the FiR 1 reactor, collimated to dia. 70 mm, is modulated by a Fourier chopper (dia. 400 mm) which is placed inside a massive boron-loaded particle board shielding of 900 mm wall thickness. A thin flat sample (5 mm x dia. 80 mm typically) is mounted on a turntable at a distance of 4 m from the chopper, and the diffracted neutrons are counted by a scintillation detector at 4 m distance from the sample. The scattering angle 2θ can be chosen between 90° and 160° to cover Bragg angles from 45° up to 80°. The angle between the chopper disc and the incident beam direction as well as the angle of the detector surface relative to the diffracted beam can be adjusted between 45° and 90° in order to accomplish time-focusing. In our set-up, with equal flight paths from chopper to sample and from sample to detector, the time-focusing conditions are fulfilled when the chopper and the detector are parallel to the sample plane. The time-of-flight spectrum of the scattered neutrons is measured by the reverse time-of-flight method in which, instead of neutrons, one essentially records the modulation function of the chopper during constant periods preceding each detected neutron. With a Fourier chopper whose speed is varied in a suitable way, the method is equivalent to the conventional Fourier method but the spectrum is obtained directly without any off-line calculations. The new diffractometer is operated automatically by a Super Nova computer which not only accumulates the synthesized diffraction pattern but also controls the chopper speed according to the modulation frequency sweep chosen by the user to obtain a
Amplitude and rise time compensated timing optimized for large semiconductor detectors
International Nuclear Information System (INIS)
Kozyczkowski, J.J.; Bialkowski, J.
1976-01-01
The ARC timing described has excellent timing properties even over a wide energy range, e.g. from 10 keV to over 1 MeV. The detector signal from a preamplifier is accepted directly by the unit, since a timing filter amplifier with a sensitivity of 1 mV is incorporated. The adjustable rise time rejection feature makes it possible to achieve a good prompt time spectrum with a symmetrical exponential shape down to less than 1/100 of the peak value. A complete block diagram of the unit is given together with results of extensive tests of its performance. For example, the time spectrum for (1330 ± 20) keV of ⁶⁰Co taken with a 43 cm³ Ge(Li) detector has the following parameters: fwhm = 2.2 ns, fwtm = 4.4 ns and fw(0.01)m = 7.6 ns, and for (50 ± 10) keV of ²²Na the following was obtained: fwhm = 10.8 ns, fwtm = 21.6 ns and fw(0.01)m = 34.6 ns. In another experiment with two fast plastic scintillators (NE 102A) and using a 20% dynamic energy range the following was measured: fwhm = 280 ps, fwtm = 470 ps and fw(0.01)m = 70 ps. (Auth.)
Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.
Rogers, Mark W; Mille, Marie-Laure
2016-08-15
Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Directory of Open Access Journals (Sweden)
Eun Seok Lee
2003-01-01
An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance, using an unsteady-flow, Reynolds-averaged Navier-Stokes equations solver based on explicit finite differencing, Runge-Kutta multistage time marching, and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as an optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.
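The penalty method mentioned above folds the fixed-mass-flow constraint into the GA's scalar fitness. A minimal sketch of such a penalized objective is given below; the weight values, tolerance, and function names are illustrative assumptions, not quantities taken from the paper.

```python
def penalized_objective(loss, lift, mass_flow, target_mass_flow,
                        w_lift=1.0, mu=1e3, tol=1e-3):
    """Scalar fitness for a minimizing GA: reduce total pressure loss,
    reward lift, and enforce the fixed-mass-flow constraint through a
    quadratic penalty. w_lift, mu, and tol are illustrative values."""
    # Constraint violation beyond a small tolerance band
    violation = max(0.0, abs(mass_flow - target_mass_flow) - tol)
    # Lower fitness is better; infeasible designs are penalized quadratically
    return loss - w_lift * lift + mu * violation ** 2
```

A design meeting the mass-flow target is scored purely on loss and lift, while any deviation beyond the tolerance is penalized steeply, which is the usual way a GA handles hard constraints without rejecting individuals outright.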
This presentation, Environmental Exposures and Health Risks in California Child Care Facilities: First Steps to Improve Environmental Health where Children Spend Time, was given at the NIEHS/EPA Children's Centers 2016 Webinar Series: Exposome.
Church, Timothy S
2016-11-01
The analysis plan and article in this issue of the Journal by Evenson et al. (Am J Epidemiol 2016;184(9):621-632) is well-conceived, thoughtfully conducted, and tightly written. The authors utilized the National Health and Nutrition Examination Survey data set to examine the association between accelerometer-measured physical activity level and mortality and found that meeting the 2013 federal Physical Activity Guidelines resulted in a 35% reduction in risk of mortality. The timing of these findings could not be better, given the ubiquitous nature of personal accelerometer devices. The masses are already equipped to routinely quantify their activity, and now we have the opportunity and responsibility to provide evidence-based, tailored physical activity goals. We have evidence-based physical activity guidelines, mass distribution of devices to track activity, and now scientific support indicating that meeting the physical activity goal, as assessed by these devices, has substantial health benefits. All of the pieces are in place to make physical inactivity a national priority, and we now have the opportunity to positively affect the health of millions of Americans. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Data warehousing technologies for large-scale and right-time data
DEFF Research Database (Denmark)
Xiufeng, Liu
heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, it becomes challenging for large-scale data, and hard to meet the near real-time/right-time business decisions. This thesis considers some...
Identification of Dobrava, Hantaan, Seoul, and Puumala viruses by one-step real-time RT-PCR.
Aitichou, Mohamed; Saleh, Sharron S; McElroy, Anita K; Schmaljohn, C; Ibrahim, M Sofi
2005-03-01
We developed four assays for specifically identifying Dobrava (DOB), Hantaan (HTN), Puumala (PUU), and Seoul (SEO) viruses. The assays are based on the real-time one-step reverse transcriptase polymerase chain reaction (RT-PCR) with the small segment used as the target sequence. The detection limits of DOB, HTN, PUU, and SEO assays were 25, 25, 25, and 12.5 plaque-forming units, respectively. The assays were evaluated in blinded experiments, each with 100 samples that contained Andes, Black Creek Canal, Crimean-Congo hemorrhagic fever, Rift Valley fever and Sin Nombre viruses in addition to DOB, HTN, PUU and SEO viruses. The sensitivity levels of the DOB, HTN, PUU, and SEO assays were 98%, 96%, 92% and 94%, respectively. The specificity of DOB, HTN and SEO assays was 100% and the specificity of the PUU assay was 98%. Because of the high levels of sensitivity, specificity, and reproducibility, we believe that these assays can be useful for diagnosing and differentiating these four Old-World hantaviruses.
A two-step real-time PCR assay for quantitation and genotyping of human parvovirus 4.
Väisänen, E; Lahtinen, A; Eis-Hübinger, A M; Lappalainen, M; Hedman, K; Söderlund-Venermo, M
2014-01-01
Human parvovirus 4 (PARV4) of the family Parvoviridae was discovered in a plasma sample of a patient with an undiagnosed acute infection in 2005. Currently, three PARV4 genotypes have been identified; however, their clinical significance is unknown. Interestingly, these genotypes seem to differ in epidemiology. In Northern Europe, the USA and Asia, genotypes 1 and 2 have been found to occur mainly in persons with a history of injecting drug use or other parenteral exposure. In contrast, genotype 3 appears to be endemic in sub-Saharan Africa, where it infects children and adults without such risk behaviour. In this study, a novel straightforward and cost-efficient molecular assay for both quantitation and genotyping of PARV4 DNA was developed. The two-step method first applies a single-probe pan-PARV4 qPCR for screening and quantitation of this relatively rare virus; subsequently, only the positive samples undergo real-time PCR-based multi-probe genotyping. The new qPCR-GT method is highly sensitive and specific regardless of the genotype, and is thus suitable for studying the clinical impact and occurrence of the different PARV4 genotypes. Copyright © 2013 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Daniel Junker
2012-01-01
Objectives. To evaluate prostate cancer (PCa) detection rates of real-time elastography (RTE) in dependence of tumor size, tumor volume, localization and histological type. Materials and Methods. Thirty-nine patients with biopsy-proven PCa underwent RTE before radical prostatectomy (RPE) to assess prostate tissue elasticity, and hard lesions were considered suspicious for PCa. After RPE, the prostates were prepared as whole-mount step sections and were compared with imaging findings for analyzing PCa detection rates. Results. RTE detected 6/62 cancer lesions with a maximum diameter of 0-5 mm (9.7%), 10/37 with a maximum diameter of 6-10 mm (27%), 24/34 with a maximum diameter of 11-20 mm (70.6%), 14/14 with a maximum diameter of >20 mm (100%) and 40/48 with a volume ≥0.2 cm³ (83.3%). Regarding cancer lesions with a volume ≥0.2 cm³ there was a significant difference in PCa detection rates between Gleason scores with predominant Gleason pattern 3 compared to those with predominant Gleason pattern 4 or 5 (75% versus 100%; P=0.028). Conclusions. RTE is able to detect PCa of significant tumor volume and of predominant Gleason pattern 4 or 5 with high confidence, but is of limited value in the detection of small cancer lesions.
Gillet, P; Rapaille, A; Benoît, A; Ceinos, M; Bertrand, O; de Bouyalsky, I; Govaerts, B; Lambermont, M
2015-01-01
Whole blood donation is generally safe, although vasovagal reactions (VVR) can occur (approximately 1%). Risk factors are well known and prevention measures have been shown to be efficient. This study evaluates the impact of donor retention in relation to the occurrence of vasovagal reactions for the first three blood donations. Our study of data collected over three years evaluated the impact of classical risk factors and provided a model including the best combination of covariates predicting VVR. The impact of a reaction at first donation on return rate and complications until the third donation was evaluated. Our data (523,471 donations) confirmed the classical risk factors (gender, age, donor status and relative blood volume). After stepwise variable selection, donor status, relative blood volume and their interaction were the only remaining covariates in the model. Of 33,279 first-time donors monitored over a period of at least 15 months, the first three donations were followed. The data emphasised the impact of a complication at first donation: the return rate for a second donation was reduced and the risk of vasovagal reaction was increased at least until the third donation. First-time donation is a crucial step in a donor's career. Donors who experienced a reaction at their first donation have a lower return rate for a second donation and a higher risk of vasovagal reaction at least until the third donation. Prevention measures have to be put in place to improve donor retention and provide blood banks with an adequate blood supply. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Okubo, Yoshiro; Schoene, Daniel; Lord, Stephen R
2017-04-01
To examine the effects of stepping interventions on fall risk factors and fall incidence in older people. Electronic databases (PubMed, EMBASE, CINAHL, Cochrane, CENTRAL) and reference lists of included articles from inception to March 2015. Randomised (RCT) or clinical controlled trials (CCT) of volitional and reactive stepping interventions that included older (minimum age 60) people providing data on falls or fall risk factors. Meta-analyses of seven RCTs (n=660) showed that the stepping interventions significantly reduced the rate of falls (rate ratio=0.48, 95% CI 0.36 to 0.65) and the proportion of fallers (risk ratio=0.51, 95% CI 0.38 to 0.68). A meta-analysis of two RCTs (n=62) showed that stepping interventions significantly reduced laboratory-induced falls, and meta-analysis findings of up to five RCTs and CCTs (n=36-416) revealed that stepping interventions significantly improved simple and choice stepping reaction time, single leg stance, and timed up and go performance. Overall, stepping interventions reduced falls among older adults by approximately 50%. This clinically significant reduction may be due to improvements in reaction time, gait, balance and balance recovery but not in strength. Further high-quality studies aimed at maximising the effectiveness and feasibility of stepping interventions are required. CRD42015017357. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Sex ratio and time to pregnancy: analysis of four large European population surveys
DEFF Research Database (Denmark)
Joffe, Mike; Bennett, James; Best, Nicky
2007-01-01
To test whether the secondary sex ratio (proportion of male births) is associated with time to pregnancy, a marker of fertility. Design: Analysis of four large population surveys. Setting: Denmark and the United Kingdom. Participants: 49 506 pregnancies.
Kang, Bongmun; Yoon, Ho-Sung
2015-02-01
Recently, microalgae have been considered as a renewable source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algae productivity and compositional analysis, especially the total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For this reason, various high-throughput technologies have been suggested for accurate measurement of total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of the microalgae without a pretreatment. However, these methods are difficult to apply to wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C of Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of the wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was determined between the HE value and the total lipid content of Nostoc sp. KNUA003. In particular, there was a linear relationship of 98% between the HE value and the total lipid content of the tested microorganism. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
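The calibration described above is an ordinary least-squares line relating the heat evolved (HE) to total lipid content. A minimal sketch is given below; the function name and any data passed to it are hypothetical, since the paper's calibration values are not reproduced here.

```python
import numpy as np

def calibrate_lipid_from_heat(he_values, lipid_fractions):
    """Fit a least-squares line lipid = slope * HE + intercept and report
    the coefficient of determination (the paper reports ~98% linearity).
    Input arrays are hypothetical calibration measurements."""
    slope, intercept = np.polyfit(he_values, lipid_fractions, 1)
    r = np.corrcoef(he_values, lipid_fractions)[0, 1]

    def predict(he):
        # Convert a measured heat-evolved value into estimated lipid content
        return slope * he + intercept

    return predict, r ** 2
```

Once calibrated, `predict` can convert the HE value of a wet sample directly into an estimated lipid content, which is the monitoring use the abstract proposes.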
Decrease of the tunneling time and violation of the Hartman effect for large barriers
International Nuclear Information System (INIS)
Olkhovsky, V.S.; Zaichenko, A.K.; Petrillo, V.
2004-01-01
The explicit formulation of the initial conditions of the definition of the wave-packet tunneling time is proposed. This formulation takes adequately into account the irreversibility of the wave-packet space-time spreading. Moreover, it explains the violations of the Hartman effect, leading to a strong decrease of the tunneling times up to negative values for wave packets with large momentum spreads due to strong wave-packet time spreading.
International Nuclear Information System (INIS)
Aboanber, A.E.; Hamada, Y.M.
2008-01-01
An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods.
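The embedded-pair idea behind the automatic time step control above can be sketched compactly. The snippet below uses the well-known Bogacki-Shampine 3(2) pair rather than the paper's fourth-order method with embedded third-order solution, but the control logic (compare the two solutions, accept or reject, rescale the step) is the same; the tolerances and safety factors are illustrative assumptions.

```python
import numpy as np

def embedded_rk_step(f, t, y, h, atol=1e-8, rtol=1e-6):
    """One adaptive step of the Bogacki-Shampine 3(2) embedded pair:
    a third-order solution plus a second-order estimate whose difference
    drives automatic step-size control (sketch, not the paper's scheme)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9.0            # 3rd-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 + 6 * k2 + 8 * k3 + 3 * k4) / 24.0  # 2nd-order estimate
    # Scaled truncation-error estimate from the embedded solution
    err = np.max(np.abs(y3 - y2) / (atol + rtol * np.abs(y3)))
    # Standard controller: shrink on rejection, grow cautiously on acceptance
    h_new = h * min(5.0, max(0.2, 0.9 * (1.0 / max(err, 1e-16)) ** (1.0 / 3.0)))
    if err <= 1.0:
        return t + h, y3, h_new   # step accepted
    return t, y, h_new            # step rejected; retry with smaller h
```

For stiff diffusion kinetics the pair would be replaced by an A(α)-stable scheme, but the accept/reject loop and the error-based rescaling carry over unchanged.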
Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi
2018-06-01
Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12; B = -6.45, 95% CI: -11.88, -0.41): the greater the distance between workstations and office destinations, the less participants walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other
Melzer, I; Krasovsky, T; Oddsson, L I E; Liebermann, D G
2010-12-01
This study investigated the force-time relationship during the push-off stage of a rapid voluntary step in young and older healthy adults, to test the assumption that when balance is lost a quick step may preserve stability. The ability to achieve peak propulsive force within a short time is critical for the performance of such a quick powerful step. We hypothesized that older adults would achieve peak force and power in significantly longer times compared to young people, particularly during the push-off preparatory phase. Fifteen young and 15 older volunteers performed rapid forward steps while standing on a force platform. Absolute anteroposterior and body weight normalized vertical forces during the push-off in the preparation and swing phases were used to determine time to peak and peak force, and step power. Two-way analyses of variance ('Group' [young-older] by 'Phase' [preparation-swing]) were used to assess our hypothesis (P ≤ 0.05). Older people exerted lower peak forces (anteroposterior and vertical) than young adults, but not necessarily lower peak power. More significantly, they showed a longer time to peak force, particularly in the vertical direction during the preparation phase. Older adults generate propulsive forces slowly and reach lower magnitudes, mainly during step preparation. The time to achieve a peak force and power, rather than its actual magnitude, may account for failures in quickly performing a preventive action. Such delay may be associated with the inability to react and recruit muscles quickly. Thus, training the elderly to step fast in response to relevant cues may be beneficial in the prevention of falls. Copyright © 2010 Elsevier Ltd. All rights reserved.
Time dispersion in large plastic scintillation neutron detector [Paper No.: B3]
International Nuclear Information System (INIS)
De, A.; Dasgupta, S.S.; Sen, D.
1993-01-01
The time dispersion seen by the photomultiplier (PM) tube in a large plastic scintillation neutron detector, and the light collection mechanism of the same, have been computed, showing that this time dispersion (TD) seen by the PM tube does not necessarily increase with increasing incident neutron energy, in contrast to the usual finding that TD increases with increasing energy. (author). 8 refs., 4 figs.
Korir, Peter C.; Dejene, Francis B.
2018-04-01
In this work a two-step growth process was used to prepare Cu(In,Ga)Se2 thin films for solar cell applications. The first step involves deposition of Cu-In-Ga precursor films, followed by a selenization process under vacuum using elemental selenium vapor to form the Cu(In,Ga)Se2 film. The growth process was done at a fixed temperature of 515 °C for 45, 60 and 90 min to control film thickness and gallium incorporation into the absorber layer. The X-ray diffraction (XRD) pattern confirms single-phase Cu(In,Ga)Se2 film for all three samples, and no secondary phases were observed. A shift of the diffraction peaks to higher 2θ values is observed for the thin films compared to that of pure CuInSe2. The surface morphology of the film grown for 60 min was characterized by the presence of uniform large-grain particles, which are typical of device-quality material. Photoluminescence spectra show a shift of emission peaks to higher energies for longer selenization durations, attributed to the incorporation of more gallium into the CuInSe2 crystal structure. Electron probe microanalysis (EPMA) revealed a uniform distribution of the elements across the surface of the film. The elemental ratios Cu/(In + Ga) and Se/(Cu + In + Ga) depend strongly on the selenization time. The Cu/(In + Ga) ratio for the 60 min film is 0.88, which is in the range of values (0.75-0.98) for the best solar cell device performance.
International Nuclear Information System (INIS)
Omel'yanov, G.A.
1995-07-01
The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in the alloy, when a set of "superheated liquid" appears inside the "solid" part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs.
Directory of Open Access Journals (Sweden)
Shichao Sun
2015-01-01
This paper addresses the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem of finding the optimal path connecting any pair of customer nodes in an STD network was solved through a robust approach that does not require the probability distributions of link travel times. On this basis, the proposed STD-VRP model can be converted into a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can be applied to obtain the solution. Numerical experiments were conducted on STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated from historical floating-car data. A route construction algorithm was applied to solve the STD problem efficiently in 4 delivery scenarios. The computational results show that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances; the improvement can be very significant for large-scale network delivery tasks, with no increase in cost or environmental impact.
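The robust subproblem can be illustrated with a worst-case variant of time-dependent Dijkstra. The network, the interval-valued travel times, and the worst-case criterion below are invented for illustration (the paper's robust approach is more elaborate), and the label-setting argument assumes FIFO-like links:

```python
import heapq

def robust_td_shortest_path(graph, source, target, t0=0):
    """Robust time-dependent shortest path (illustrative sketch).

    graph[u][v] is a function t -> (lo, hi): an interval of possible
    link travel times when departing u at time t.  Instead of assuming
    a probability distribution, we plan against the worst case (hi),
    which is one simple notion of a 'robust' path.
    """
    dist = {source: t0}
    prev = {}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            break
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel_time in graph.get(u, {}).items():
            lo, hi = travel_time(t)
            arrival = t + hi          # worst-case arrival time
            if arrival < dist.get(v, float("inf")):
                dist[v] = arrival
                prev[v] = u
                heapq.heappush(pq, (arrival, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[target]
```

A nominally faster link with a wide uncertainty interval is then avoided in favor of a slightly slower but reliable one, which is the behavioral difference between robust and expected-value routing.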
High resolution time-of-flight measurements in small and large scintillation counters
International Nuclear Information System (INIS)
D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.
1981-01-01
In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 × 5 cm^2) and large (100 × 15 cm^2 and 75 × 25 cm^2) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions of approximately 130-150 ps FWHM for the small counters and approximately 280-300 ps FWHM for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)
2010-01-28
In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are the extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by the corresponding point-contact induced by the decoherence process. Then, we focus on large rates of decoherence and calculate the probability distribution analytically and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with Fedichkin, Solenov and Tamon's results [48] for cycle, and we see that the mixing time of CTQWs on cycle improves with adding interacting edges.
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing
2017-05-01
We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method successfully. This strategy decomposes the seismic data into partially overlapping frequency intervals through a concatenated treatment of the wavelet, largely avoiding redundant frequency information while adapting to the wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. On this basis, we comparatively analyze the effects of the update parameters of the nonlinear conjugate gradient (CG) method and of the step-length formulas on multiscale FWI through several numerical tests. Investigations of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical Methods of Optimization, vol. 1: Unconstrained Optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Française Informat Recherche Opérationnelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are the more efficient among the eight, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, namely the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that Interp is more efficient for noise-free data, while Direct is more efficient for data with Gaussian white noise. In contrast, Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or at least partly insensitive to Gaussian white noise and to the complexity of the model. When the initial velocity model deviates far from the real model or the
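For reference, the CG update parameters compared in the abstract have simple closed forms. The sketch below implements them with a fixed step length purely for illustration; real FWI would use one of the step-length formulas discussed above, and the PRP+ clipping safeguard is an assumption of this sketch:

```python
import numpy as np

def cg_beta(name, g_new, g_old, d_old):
    """Update parameter beta for several nonlinear CG variants.

    g_new, g_old: current/previous gradients; d_old: previous search
    direction.  Formula names follow the abstract (HS, PRP, CD, DY);
    the function itself is an illustrative sketch, not the authors' code.
    """
    y = g_new - g_old
    if name == "HS":     # Hestenes-Stiefel
        return g_new @ y / (d_old @ y)
    if name == "PRP":    # Polak-Ribiere-Polyak
        return g_new @ y / (g_old @ g_old)
    if name == "CD":     # Fletcher's conjugate descent
        return g_new @ g_new / (-(d_old @ g_old))
    if name == "DY":     # Dai-Yuan
        return g_new @ g_new / (d_old @ y)
    raise ValueError(name)

def minimize_cg(grad, x0, variant="PRP", iters=200, step=0.1):
    """Fixed-step nonlinear CG on a smooth function (for illustration)."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        x = x + step * d
        g_new = grad(x)
        beta = max(0.0, cg_beta(variant, g_new, g, d))  # PRP+ style clipping
        d = -g_new + beta * d
        g = g_new
    return x
```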
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of mass scans that are possibly correlated, and the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET2 environment on Windows XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
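The first (pre-alignment) step is essentially cross-correlation computed via FFTs. A minimal sketch of that idea, not the ChromAlign implementation:

```python
import numpy as np

def fft_offset(reference, sample):
    """Estimate the integer scan offset of `sample` relative to
    `reference` by cross-correlation computed with FFTs -- the idea
    behind the profile pre-alignment step described above (a sketch,
    not the ChromAlign code).  A positive result means `sample` lags."""
    n = len(reference) + len(sample) - 1
    nfft = 1 << (n - 1).bit_length()         # zero-pad to a power of two
    R = np.fft.rfft(reference, nfft)
    S = np.fft.rfft(sample, nfft)
    c = np.fft.irfft(S * np.conj(R), nfft)   # circular cross-correlation
    k = int(np.argmax(c))
    return k if k < nfft // 2 else k - nfft  # unwrap negative lags
```

Zero-padding to at least `len(reference) + len(sample) - 1` points keeps the circular correlation from wrapping, so the argmax is the true lag.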
A Short Proof of the Large Time Energy Growth for the Boussinesq System
Brandolese, Lorenzo; Mouzouni, Charafeddine
2017-10-01
We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R^3 grow large as t → ∞. In particular, the kinetic energy blows up as ||u(t)||_2^2 ~ c t^{1/2} for large time. This contrasts with the case of the Navier-Stokes equations.
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
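A queued state machine, the pattern named above, serializes all commands through a FIFO so that the controller's state evolves deterministically, one event at a time. A toy Python sketch (states and commands invented for illustration; the actual controller runs in LabVIEW):

```python
import queue

class DaqController:
    """Minimal queued state machine: every command is enqueued and a
    single consumer loop dispatches them in order, so state changes are
    serialized even when commands arrive from multiple sources."""

    def __init__(self):
        self.state = "idle"
        self.log = []
        self._q = queue.Queue()  # thread-safe FIFO of pending commands
        self._transitions = {
            ("idle", "configure"): "configured",
            ("configured", "start"): "acquiring",
            ("acquiring", "stop"): "configured",
            ("configured", "shutdown"): "idle",
        }

    def post(self, cmd):
        """Producers (GUI, TCP clients, timers) enqueue commands here."""
        self._q.put(cmd)

    def run(self):
        """Consumer loop: dispatch queued commands until 'quit'."""
        while True:
            cmd = self._q.get()
            if cmd == "quit":
                break
            nxt = self._transitions.get((self.state, cmd))
            if nxt is None:
                self.log.append(f"ignored {cmd} in {self.state}")
            else:
                self.log.append(f"{self.state} -> {nxt}")
                self.state = nxt
```

In a real controller `run()` would execute in its own thread while acquisition callbacks and network handlers call `post()`.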
He, Zi; Chen, Ru-Shan
2016-03-01
An efficient three-dimensional time-domain parabolic equation (TDPE) method is proposed for the fast analysis of the narrow-angle wideband EM scattering properties of electrically large targets. The finite-difference (FD) Crank-Nicolson (CN) scheme is the traditional tool for solving the time-domain parabolic equation; however, it demands substantial computational resources when the meshes become dense. Therefore, the alternating direction implicit (ADI) scheme is introduced to discretize the time-domain parabolic equation. In this way, the reduced transient scattered fields can be calculated line by line in each transverse plane for any time step with unconditional stability. As a result, fewer computational resources are required for the proposed ADI-based TDPE method compared with both the traditional CN-based TDPE method and the finite-difference time-domain (FDTD) method. By employing the rotating TDPE method, the complete bistatic RCS can be obtained with encouraging accuracy for any observation angle. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method.
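The key ADI idea, unconditional stability while solving only one direction implicitly (line by line) per half-step, can be shown on the 2D heat equation as a stand-in for the parabolic wave equation. A sketch under stated assumptions: Peaceman-Rachford splitting, zero Dirichlet boundaries, square grid, and dense solves for brevity where a tridiagonal solver would be used in practice:

```python
import numpy as np

def adi_heat_step(u, dt, dx, alpha=1.0):
    """One ADI (Peaceman-Rachford) step for u_t = alpha (u_xx + u_yy)
    on a square grid with zero Dirichlet boundaries.  Each half-step is
    implicit in one direction only, so in practice only tridiagonal
    systems are solved, and the scheme is unconditionally stable."""
    n = u.shape[0]
    r = alpha * dt / (2 * dx * dx)
    # 1D discrete Laplacian (tridiagonal)
    T = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
         + np.diag(np.full(n - 1, 1.0), 1))
    A = np.eye(n) - r * T     # implicit operator
    B = np.eye(n) + r * T     # explicit operator
    # half-step 1: implicit along axis 0 (x), explicit along axis 1 (y)
    u_star = np.linalg.solve(A, u @ B)
    # half-step 2: implicit along axis 1 (y), explicit along axis 0 (x)
    return np.linalg.solve(A, (B @ u_star).T).T
```

With zero Dirichlet boundaries every Fourier mode of the error is damped by a factor of magnitude below one regardless of dt, which is the unconditional stability exploited by the ADI-TDPE scheme.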
Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.
2000-01-01
Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope compositions of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ~3×10^6 km^2, the majority have strikingly uniform incompatible element patterns (La/Yb = 0.96±0.16, n = 64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. 143Nd/144Nd(in) = 0.51291±3, εNd(i) = 7.3±0.6, 206Pb/204Pb(in) = 18.86±0.12, n = 54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. 3He/4He ratios in olivines of enriched picrites at Quepos are ~12 times higher than the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on
A robust and high-performance queue management controller for large round trip time networks
Khoshnevisan, Ladan; Salmasi, Farzad R.
2016-05-01
Congestion management for the transmission control protocol is of utmost importance to prevent packet loss within a network, which necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages that lead to suboptimal performance and even instability in the case of large round trip times and/or external disturbances. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip times and parameter variations on queue management. Conventional approaches such as proportional-integral and random early detection procedures lead to unstable behaviour due to large delays. Moreover, the internal model control-Smith scheme suffers from large oscillations due to the large round trip time, while other schemes such as internal model control with proportional-integral-derivative control show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection simultaneously. Simulation results based on Matlab/Simulink and Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
Directory of Open Access Journals (Sweden)
Jin Wang
2017-03-01
This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles based on an interval type-II Takagi-Sugeno fuzzy model. First, an interval type-II Takagi-Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both the modeling and the fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.
CAN LARGE TIME DELAYS OBSERVED IN LIGHT CURVES OF CORONAL LOOPS BE EXPLAINED IN IMPULSIVE HEATING?
International Nuclear Information System (INIS)
Lionello, Roberto; Linker, Jon A.; Mikić, Zoran; Alexander, Caroline E.; Winebarger, Amy R.
2016-01-01
The light curves of solar coronal loops often peak first in channels associated with higher temperatures and then in those associated with lower temperatures. The delay times between the different narrowband EUV channels have been measured for many individual loops and, recently, for every pixel of an active region observation. The time delays between channels for an active region exhibit a wide range of values, and the maximum time delay in each channel pair can be quite large, i.e., >5000 s. These large time delays make up 3%-26% (depending on the channel pair) of the pixels where a trustworthy, positive time delay is measured. It has been suggested that these time delays can be explained by simple impulsive heating, i.e., a short burst of energy that heats the plasma to a high temperature, after which the plasma is allowed to cool through radiation and conduction back to its original state. In this paper, we investigate whether the largest observed time delays can be explained by this hypothesis by simulating a series of coronal loops with different heating rates, loop lengths, abundances, and geometries to determine the range of expected time delays between a set of four EUV channels. We find that impulsive heating cannot account for the largest time delays observed in two of the channel pairs and that the majority of the large time delays can only be explained by long, expanding loops with photospheric abundances. Additional observations may rule out these simulations as an explanation for the long time delays. We suggest either that the time delays found in this manner may not be representative of real loop evolution, or that the impulsive heating and cooling scenario may be too simple to explain the observations, and other potential heating scenarios must be explored.
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
1997-01-01
This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
2002-01-01
This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g
Computing the real-time Green's Functions of large Hamiltonian matrices
Iitaka, Toshiaki
1998-01-01
A numerical method is developed for calculating the real-time Green's functions of very large sparse Hamiltonian matrices, which exploits the numerical solution of the inhomogeneous time-dependent Schroedinger equation. The method has a clear-cut structure reflecting the most naive definition of the Green's functions and is very well suited to parallel and vector supercomputers. The effectiveness of the method is illustrated by applying it to simple lattice models. An application of this method...
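The underlying idea, obtaining G(t) = -i exp(-iHt) by time-stepping the Schroedinger equation rather than diagonalizing H, can be sketched as follows. The Crank-Nicolson propagator is chosen here for illustration; the paper's scheme, with its inhomogeneous source term, differs in the details:

```python
import numpy as np

def greens_column(H, j, t, nsteps=2000):
    """Column j of the real-time Green's function G(t) = -i exp(-iHt)
    (hbar = 1, t > 0), obtained by integrating the time-dependent
    Schroedinger equation i dpsi/dt = H psi with a Crank-Nicolson step
    instead of diagonalizing H.  For a large sparse H one would replace
    the dense solve by a sparse/iterative one; this is a sketch."""
    n = H.shape[0]
    psi = np.zeros(n, complex)
    psi[j] = 1.0                      # source on site j
    dt = t / nsteps
    # Cayley form (I + i dt H/2)^-1 (I - i dt H/2): unitary, stable
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    step = np.linalg.solve(A, B)
    for _ in range(nsteps):
        psi = step @ psi
    return -1j * psi
```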
Calculation of neutron die-away times in a large-vehicle portal monitor
International Nuclear Information System (INIS)
Lillie, R.A.; Santoro, R.T.; Alsmiller, R.G. Jr.
1980-05-01
Monte Carlo methods have been used to calculate neutron die-away times in a large-vehicle portal monitor. These calculations were performed to investigate the adequacy of using neutron die-away time measurements to detect the clandestine movement of shielded nuclear materials. The geometry consisted of a large tunnel lined with 3He proportional counters. The time behavior of the (n,p) capture reaction in these counters was calculated when the tunnel contained a number of different tractor-trailer load configurations. Neutron die-away times obtained from weighted least squares fits to these data were compared. The change in neutron die-away time due to the replacement of cargo in a fully loaded truck with a spherical shell containing 240 kg of borated polyethylene was calculated to be less than 3%. This result together with the overall behavior of neutron die-away time versus mass inside the tunnel strongly suggested that measurements of this type will not provide a reliable means of detecting shielded nuclear materials in a large vehicle. 5 figures, 4 tables
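The die-away time extraction described above amounts to a weighted least-squares fit of an exponential decay. A minimal sketch (log-linear fit with Poisson-motivated weights; an assumption for illustration, not necessarily the report's exact procedure):

```python
import numpy as np

def fit_die_away(t, counts):
    """Estimate a neutron die-away time tau from count-vs-time data by a
    weighted least-squares fit of log(counts) = log(A) - t/tau.
    With Poisson counts, weighting each point by its count is the
    standard choice, since var(log N) ~ 1/N."""
    t = np.asarray(t, float)
    y = np.log(counts)
    w = np.asarray(counts, float)          # weights ~ 1 / var(log N)
    W = np.sum(w)
    tbar = np.sum(w * t) / W
    ybar = np.sum(w * y) / W
    slope = np.sum(w * (t - tbar) * (y - ybar)) / np.sum(w * (t - tbar) ** 2)
    return -1.0 / slope                    # die-away time tau
```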
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger
2017-01-01
Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
Large-time behavior of solutions to a reaction-diffusion system with distributed microstructure
Muntean, A.
2009-01-01
We study the large-time behavior of a class of reaction-diffusion systems with constant distributed microstructure arising when modeling diffusion and reaction in structured porous media. The main result of this Note is the following: as t → ∞ the macroscopic concentration vanishes, while
The LOFT (Large Observatory for X-ray Timing) background simulations
DEFF Research Database (Denmark)
Campana, R.; Feroci, M.; Del Monte, E.
2012-01-01
The Large Observatory For X-ray Timing (LOFT) is an innovative medium-class mission selected for an assessment phase in the framework of the ESA M3 Cosmic Vision call. LOFT is intended to answer fundamental questions about the behavior of matter in the very strong gravitational and magnetic fields...
An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files
Directory of Open Access Journals (Sweden)
Anthony Chan
2008-01-01
A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and describe experiments demonstrating the performance of this file format.
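The target access pattern, retrieving an arbitrary time window in time roughly proportional to the number of events it contains rather than to the file size, can be illustrated with a flat sorted index. This toy version captures only the lookup cost; the hierarchical format described above additionally handles states that span window boundaries:

```python
import bisect

class TraceIndex:
    """Toy index over time-stamped trace events supporting access to an
    arbitrary time window in O(log N + k) instead of a full scan, where
    k is the number of events returned."""

    def __init__(self, events):
        # events: iterable of (timestamp, payload), in any order
        self.events = sorted(events)
        self.times = [t for t, _ in self.events]

    def window(self, t0, t1):
        """All events with t0 <= timestamp < t1, in time order."""
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_left(self.times, t1)
        return self.events[lo:hi]
```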
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
de Koning, S.; Kaal, E.; Janssen, H.-G.; van Platerink, C.; Brinkman, U.A.Th.
2008-01-01
The feasibility of a versatile system for multi-step direct thermal desorption (DTD) coupled to comprehensive gas chromatography (GC × GC) with time-of-flight mass spectrometric (TOF-MS) detection is studied. As an application the system is used for the characterization of fresh versus aged olive
Kvelde, T.; Pijnappels, M.A.G.M.; Delbaere, K.; Close, J.C.; Lord, S.R.
2010-01-01
Background. The aim of the study was to use path analysis to test a theoretical model proposing that the relationship between self-reported depressed mood and choice stepping reaction time (CSRT) is mediated by psychoactive medication use, physiological performance, and cognitive ability. A total of
Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)
Crowell, B. W.; Bock, Y.; Squibb, M. B.
2010-12-01
Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic) and are therefore ideal for real-time monitoring of fault slip in a region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks but only measure accelerations or velocities, putting them at a severe disadvantage for ascertaining the full extent of slip during a large earthquake in real time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan's GEONET, consisting of about 1200 stations, during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island; the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size estimate through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
International Nuclear Information System (INIS)
Fowler, Jack F.; Limbergen, Erik F.M. van
1997-01-01
Purpose: To explore the possible increase of radiation effect in tissues irradiated by pulsed brachytherapy (PDR) for local tissue dose rates between those 'averaged over the whole pulse' and the instantaneous high dose rates close to the dwell positions. Increased effect is more likely for tissues with short half-times of repair, of the order of a few minutes, similar to pulse durations. Methods and Materials: Calculations were done assuming the linear-quadratic formula for radiation damage, in which only the dose-squared term is subject to exponential repair. The situation with two components of T1/2 is addressed. A constant overall time of 140 h and a constant total dose of 70 Gy were assumed throughout, the continuous low dose rate of 0.5 Gy/h (CLDR) providing the unitary standard effects for each PDR condition. Effects of dose rates ranging from 4 Gy/h to 120 Gy/h (HDR at 2 Gy/min) were studied, covering the gap in an earlier publication. Four schedules were examined: doses per pulse of 0.5, 1, 1.5, and 2 Gy given at repetition intervals of 1, 2, 3, and 4 h, respectively, each with a range of assumed half-times of repair from 4 min to 1.5 h. Results are presented for late-responding tissues, the differences from CLDR being two or three times greater than for early-responding tissues and most tumors. Results: Curves are presented relating the ratio of increased biological effect (proportional to log cell kill) calculated for PDR relative to CLDR. Ratios as high as 1.5 can be found for large doses per pulse (2 Gy) if the half-time of repair in tissues is as short as a few minutes. The major influences on effect are dose per pulse, half-time of repair in tissue, and - when T1/2 is short - the instantaneous dose rate. Maximum ratios of PDR/CLDR occur when the dose rate is such that the pulse duration is approximately equal to T1/2. As the dose rate in the pulse is increased, a plateau of effect is reached, for most T1/2 values, above 10 to 20 Gy/h, which is
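The PDR/CLDR effect ratios discussed above can be approximated with standard linear-quadratic incomplete-repair formulas. The sketch below uses the Thames incomplete-repair factor for the pulses and the Lea-Catcheside protraction factor for CLDR; it neglects intra-pulse protraction (i.e., it corresponds to the high-instantaneous-dose-rate plateau), so it illustrates the trend rather than reproducing the paper's exact calculation. The tissue parameters in the usage note are assumptions:

```python
import math

def lq_pulsed(alpha, beta, d, n, interval_h, t_half_h):
    """LQ effect (proportional to log cell kill) of n acute pulses of
    dose d separated by interval_h, with incomplete mono-exponential
    repair of half-time t_half_h between pulses (Thames model)."""
    mu = math.log(2.0) / t_half_h
    theta = math.exp(-mu * interval_h)
    hn = (2.0 / n) * (theta / (1 - theta)) * (n - (1 - theta**n) / (1 - theta))
    return n * d * (alpha + beta * d * (1 + hn))

def lq_cldr(alpha, beta, dose, time_h, t_half_h):
    """LQ effect of continuous irradiation over time_h, with the
    Lea-Catcheside protraction factor g reducing the beta term."""
    mu = math.log(2.0) / t_half_h
    x = mu * time_h
    g = 2.0 * (x - 1 + math.exp(-x)) / (x * x)
    return dose * (alpha + beta * dose * g)
```

With assumed late-tissue parameters alpha = 0.15 Gy^-1, beta = 0.05 Gy^-2 (alpha/beta = 3 Gy), 2 Gy pulses every 4 h versus 70 Gy at 0.5 Gy/h over 140 h, the PDR/CLDR ratio comes out near 1.6 for T1/2 = 4 min and near 1.1 for T1/2 = 1.5 h, consistent with the trend reported above.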
Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P
2017-09-01
This study aims to assess the impact of off-campus facility expansion by a large academic health system on patient travel times for screening mammography. Screening mammograms performed from 2013 to 2015 and associated patient demographics were identified using the NYU Langone Medical Center Enterprise Data Warehouse. During this time, the system's number of mammography facilities increased from 6 to 19, reflecting expansion beyond Manhattan throughout the New York metropolitan region. Geocoding software was used to estimate driving times from patients' homes to imaging facilities. For 147,566 screening mammograms, the mean estimated patient travel time was 19.9 ± 15.2 minutes. With facility expansion, travel times declined significantly, as did differences in travel times between sociodemographic subgroups. However, travel times to pre-expansion facilities remained stable (initial: 26.8 ± 18.9 minutes; final: 26.7 ± 18.6 minutes). Among women undergoing mammography both before and after expansion, travel times were shorter for the post-expansion mammogram in only 6.3%, although this rate varied significantly across subgroups. Facility expansion can thus lessen travel burden and reduce travel-time variation among sociodemographic populations. Nonetheless, existing patients strongly tend to return to established facilities despite potentially shorter travel times elsewhere, suggesting strong site loyalty. Variation in travel times likely relates to various factors other than facility proximity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Oh, Suhk Kun [Chungbuk National University, Chungbuk (Korea, Republic of)
2006-01-15
As an extension of our previous work on the relationship between time in Monte Carlo simulations and time in the continuous master equation for the infinite-range Glauber kinetic Ising model in the absence of any magnetic field, we explored the same model in the presence of a static magnetic field. Monte Carlo steps per spin, taken as time in the MC simulations, again turn out to be proportional to time in the master equation for the model in relatively large static magnetic fields at any temperature. At and near the critical point in a relatively small magnetic field, the model exhibits a significant finite-size dependence, and the solution to the Suzuki-Kubo differential equation stemming from the master equation needs to be re-scaled to fit the Monte Carlo steps per spin for systems with different numbers of spins.
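The Suzuki-Kubo mean-field equation referred to above can be sketched in a few lines. This is a minimal illustration with coupling J = 1 and arbitrary illustrative beta and h; the finite-size re-scaling discussed in the abstract is not modeled.

```python
import math

def suzuki_kubo_trajectory(beta, h, m0=0.0, dt=0.01, steps=5000):
    """Euler integration of the mean-field (infinite-range) Glauber
    kinetics dm/dt = -m + tanh(beta*(m + h)), with J = 1. Time here is
    measured in the units the paper maps onto Monte Carlo steps per spin."""
    m = m0
    traj = [m]
    for _ in range(steps):
        m += dt * (-m + math.tanh(beta * (m + h)))
        traj.append(m)
    return traj

# Paramagnetic phase (beta < 1) with a static field: the magnetization
# relaxes to the self-consistent fixed point m* = tanh(beta*(m* + h)).
traj = suzuki_kubo_trajectory(beta=0.5, h=0.2)
m_star = traj[-1]
```

Comparing such a trajectory against MC steps per spin is the proportionality test the abstract describes; near the critical point (beta close to 1, small h) the comparison requires the finite-size re-scaling noted above.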
Time delay effects on large-scale MR damper based semi-active control strategies
International Nuclear Information System (INIS)
Cha, Y-J; Agrawal, A K; Dyke, S J
2013-01-01
This paper presents a detailed investigation on the robustness of large-scale 200 kN MR damper based semi-active control strategies in the presence of time delays in the control system. Although the effects of time delay on stability and performance degradation of an actively controlled system have been investigated extensively by many researchers, degradation in the performance of semi-active systems due to time delay has yet to be investigated. Since semi-active systems are inherently stable, instability problems due to time delay are unlikely to arise. This paper investigates the effects of time delay on the performance of a building with a large-scale MR damper, using numerical simulations of near- and far-field earthquakes. The MR damper is considered to be controlled by four different semi-active control algorithms, namely (i) clipped-optimal control (COC), (ii) decentralized output feedback polynomial control (DOFPC), (iii) Lyapunov control, and (iv) simple-passive control (SPC). It is observed that all controllers except for the COC are significantly robust with respect to time delay. On the other hand, the clipped-optimal controller should be integrated with a compensator to improve the performance in the presence of time delay. (paper)
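The clipped-optimal controller named in (i) is commonly written as a bang-bang secondary law on the mismatch between the desired control force and the measured damper force. The sketch below assumes a generic maximum voltage and sign convention; it is an illustration of the control logic, not the paper's implementation.

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max=10.0):
    """Clipped-optimal command for an MR damper: apply maximum voltage
    only when the measured damper force is aligned with, and smaller in
    magnitude than, the desired control force; otherwise command zero.
    Equivalent to v = v_max * H((f_desired - f_measured) * f_measured)."""
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0
```

A time delay enters this loop as a stale f_measured: the sign of the product (f_desired - f_measured) * f_measured can then flip spuriously, which is why the abstract recommends pairing the clipped-optimal controller with a delay compensator.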
TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES
International Nuclear Information System (INIS)
Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.
2011-01-01
Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
Modeling and control of a large nuclear reactor. A three-time-scale approach
Energy Technology Data Exchange (ETDEWEB)
Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering
2013-07-01
Presents recent research on the modeling and control of a large nuclear reactor using a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.
Tudor-Locke, Catrine; Craig, Cora L; Cameron, Christine; Griffiths, Joseph M
2011-01-01
Abstract Background This study examines associations between pedometer-determined steps/day and parent-reported child's Body Mass Index (BMI) and time typically spent watching television between school and dinner. Methods Young people (aged 5-19 years) were recruited through their parents by random digit dialling and mailed a data collection package. Information on height and weight and time spent watching television between school and dinner on a typical school day was collected from parents...
Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism
Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey
2016-01-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation.
Large deviations of a long-time average in the Ehrenfest urn model
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over a time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: −ln P ≃ T I(a; …), where the omitted arguments denote additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results in the large-N limit. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the large-deviation probability.
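The time-additive quantity studied above can be illustrated with a seeded simulation of the non-interacting two-urn version: the time-averaged occupancy of one urn settles near N/2, the typical value about which the large deviations are measured. The discrete-time dynamics below is a simplification of the paper's continuous-time versions.

```python
import random

def ehrenfest_time_average(n_balls=20, steps=200_000, seed=0):
    """Discrete-time Ehrenfest urn with two urns and no interactions:
    at each step a uniformly chosen ball switches urns. Returns the
    long-time average occupancy of urn 1 (started with all balls in it)."""
    rng = random.Random(seed)
    in_urn1 = n_balls
    total = 0
    for _ in range(steps):
        if rng.random() < in_urn1 / n_balls:
            in_urn1 -= 1       # the chosen ball was in urn 1
        else:
            in_urn1 += 1
        total += in_urn1
    return total / steps
```

Observing a time average far from N/2, e.g. close to aN with a near 0 or 1, is exponentially unlikely in the observation time, which is exactly the regime the Donsker–Varadhan rate function quantifies.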
Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach
Shimjith, S R; Bandyopadhyay, B
2013-01-01
Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...
DEFF Research Database (Denmark)
Nielsen, A. C. Y.; Bottiger, B.; Midgley, S. E.
2013-01-01
As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. ... The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. ... During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients and detected enteroviruses and parechoviruses ...
Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation
Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude
We study the large time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations, where σ = 1 or σ = −1. When ρ = 2 and σ = −1, (KP) is known as the KPI equation, while ρ = 2, σ = +1 corresponds to the KPII equation. The KP equation models the propagation along the x-axis of nonlinear dispersive long waves on the surface of a fluid, when the variation along the y-axis proceeds slowly [10]. The case ρ = 3, σ = −1 has been found in the modeling of sound waves in antiferromagnetics [15]. We prove that if ρ ≥ 3 is an integer and the initial data are sufficiently small, then the solution u of (KP) satisfies time-decay estimates for all t ∈ R whose rate involves κ, where κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4. We also find the large time asymptotics for the solution.
Large-scale machine learning and evaluation platform for real-time traffic surveillance
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing
Borenstein, Severin
2007-01-01
Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...
Gulati, Sankalp; Serrà, Joan; Ishwar, Vignesh; Serra, Xavier
2014-01-01
We demonstrate a data-driven unsupervised approach for the discovery of melodic patterns in large collections of Indian art music recordings. The approach first works on single recordings and subsequently searches in the entire music collection. Melodic similarity is based on dynamic time warping. The task being computationally intensive, lower bounding and early abandoning techniques are applied during distance computation. Our dataset comprises 365 hours of music, containing 1,764 audio rec...
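Melodic similarity by dynamic time warping with lower bounding and early abandoning, as described above, can be sketched compactly. LB_Keogh is a common choice of lower bound and is used here as an illustration; the paper's exact bounds and melodic features are not reproduced.

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance with
    squared point-wise cost and no warping-window constraint."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def lb_keogh(query, cand, r=2):
    """Keogh-style lower bound: distance from the candidate to the
    upper/lower envelope of the query within a warping window r. It is
    cheap to compute and never exceeds the window-constrained DTW
    distance, so a candidate can be abandoned early whenever the bound
    already exceeds the best distance found so far."""
    total = 0.0
    for j, c in enumerate(cand):
        window = query[max(0, j - r):j + r + 1]
        lo, hi = min(window), max(window)
        if c > hi:
            total += (c - hi) ** 2
        elif c < lo:
            total += (c - lo) ** 2
    return total
```

In a pattern search over hundreds of hours of audio, the bound filters out most candidates before the quadratic DTW computation is ever run, which is the point of the lower-bounding and early-abandoning techniques mentioned in the abstract.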
Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan
2018-06-01
We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations on a bounded interval. In this paper, we first prove the global existence of the entropy solution by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or doping profile. Moreover, the novelty of this paper lies in the uniform-in-time bound for the weak solutions of the isentropic Euler-Poisson system.
Large lateral photovoltaic effect with ultrafast relaxation time in SnSe/Si junction
Energy Technology Data Exchange (ETDEWEB)
Wang, Xianjie; Zhao, Xiaofeng; Hu, Chang; Zhang, Yang; Song, Bingqian; Zhang, Lingli; Liu, Weilong; Lv, Zhe; Zhang, Yu; Sui, Yu, E-mail: suiyu@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Tang, Jinke [Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming 82071 (United States); Song, Bo, E-mail: songbo@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Academy of Fundamental and Interdisciplinary Sciences, Harbin Institute of Technology, Harbin 150001 (China)
2016-07-11
In this paper, we report a large lateral photovoltaic effect (LPE) with ultrafast relaxation time in SnSe/p-Si junctions. The LPE shows a linear dependence on the position of the laser spot, and the position sensitivity is as high as 250 mV/mm. The optical response time and the relaxation time of the LPE are about 100 ns and 2 μs, respectively. The current-voltage curve on the surface of the SnSe film indicates the formation of an inversion layer at the SnSe/p-Si interface. Our results clearly suggest that most of the excited electrons diffuse laterally in the inversion layer at the SnSe/p-Si interface, which results in a large LPE with ultrafast relaxation time. The high positional sensitivity and ultrafast relaxation time of the LPE make the SnSe/p-Si junction a promising candidate for a wide range of optoelectronic applications.
Time-sliced perturbation theory for large scale structure I: general formalism
Energy Technology Data Exchange (ETDEWEB)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
Incipient multiple fault diagnosis in real time with applications to large-scale systems
International Nuclear Information System (INIS)
Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.
1994-01-01
By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results together with an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaks, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the reported results show satisfactory performance for real-time incipient multi-fault diagnosis of such a large-scale system.
STeP: A Tool for the Development of Provably Correct Reactive and Real-Time Systems
National Research Council Canada - National Science Library
Manna, Zohar
1999-01-01
This research is directed towards the implementation of a comprehensive toolkit for the development and verification of high assurance reactive systems, especially concurrent, real time, and hybrid systems...
Nielsen, Alex Christian Yde; Böttiger, Blenda; Midgley, Sofie Elisabeth; Nielsen, Lars Peter
2013-11-01
As the number of new enteroviruses and human parechoviruses seems ever growing, the necessity for updated diagnostics is relevant. We have updated an enterovirus assay and combined it with a previously published assay for human parechovirus, resulting in a multiplex one-step RT-PCR assay. The multiplex assay was validated by analysing the sensitivity and specificity of the assay compared to the respective monoplex assays, and a good concordance was found. Furthermore, the enterovirus assay was able to detect 42 reference strains from all 4 species, and an additional 9 genotypes during panel testing and routine usage. During 15 months of routine use, from October 2008 to December 2009, we received and analysed 2187 samples (stool samples, cerebrospinal fluids, blood samples, respiratory samples and autopsy samples) from 1546 patients and detected enteroviruses and parechoviruses in 171 (8%) and 66 (3%) of the samples, respectively. 180 of the positive samples could be genotyped by PCR and sequencing, and the most common genotypes found were human parechovirus type 3, echovirus 9, enterovirus 71, Coxsackievirus A16, and echovirus 25. During 2009 in Denmark, both enterovirus and human parechovirus type 3 had a similar seasonal pattern, with a peak during the summer and autumn. Human parechovirus type 3 was almost invariably found in children less than 4 months of age. In conclusion, a multiplex assay was developed allowing simultaneous detection of 2 viruses, which can cause similar clinical symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
Effect of different air-drying time on the microleakage of single-step self-etch adhesives
Moosavi, Horieh; Forghani, Maryam; Managhebi, Esmatsadat
2013-01-01
Objectives: This study evaluated the effect of three different air-drying times on the microleakage of three self-etch adhesive systems. Materials and Methods: Class I cavities were prepared in 108 extracted sound human premolars. The teeth were divided into three main groups based on three different adhesives: Opti Bond All in One (OBAO), Clearfil S3 Bond (CSB), and Bond Force (BF). Each main group was divided into three subgroups according to air-drying time: without application of an air stream...
Interactive exploration of large-scale time-varying data using dynamic tracking graphs
Widanagamaachchi, W.
2012-10-01
Exploring and analyzing the temporal evolution of features in large-scale time-varying datasets is a common problem in many areas of science and engineering. One natural representation of such data is tracking graphs, i.e., constrained graph layouts that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take hours to compute with existing techniques. Furthermore, the resulting graphs are often unmanageably large and complex even with an ideal layout. Finally, due to the cost of the layout, changing the feature definition, e.g. by changing an iso-value, or analyzing properly adjusted sub-graphs is infeasible. To address these challenges, this paper presents a new framework that couples hierarchical feature definitions with progressive graph layout algorithms to provide an interactive exploration of dynamically constructed tracking graphs. Our system enables users to change feature definitions on-the-fly and filter features using arbitrary attributes while providing an interactive view of the resulting tracking graphs. Furthermore, the graph display is integrated into a linked view system that provides a traditional 3D view of the current set of features and allows a cross-linked selection to enable a fully flexible spatio-temporal exploration of data. We demonstrate the utility of our approach with several large-scale scientific simulations from combustion science. © 2012 IEEE.
A method for real-time memory efficient implementation of blob detection in large images
Directory of Open Access Journals (Sweden)
Petrović Vladimir L.
2017-01-01
In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs, and ASICs. It uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, video surveillance, and wide area motion imagery (WAMI). We also explored the use of the detected blobs for feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of the MSER.
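The block-wise processing idea can be illustrated with a much simpler detector: plain 4-connected components stand in for MSER here, and the block size is arbitrary. Because each block is processed independently, peak memory scales with the block rather than the full image; blobs that straddle block borders are split in this sketch, which is the case the paper's extra multiresolution pass is meant to handle.

```python
from collections import deque

def blobs_in_block(img, r0, c0, size):
    """4-connected components inside one block via BFS flood fill.
    img is a 2-D list of 0/1 values; returns lists of (row, col) pixels."""
    rows = min(size, len(img) - r0)
    cols = min(size, len(img[0]) - c0)
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if img[r0 + r][c0 + c] and not seen[r][c]:
                q, pix = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((r0 + y, c0 + x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and img[r0 + ny][c0 + nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(pix)
    return blobs

def blobs_blockwise(img, block=4):
    """Process the image block by block; blocks could run in parallel."""
    out = []
    for r0 in range(0, len(img), block):
        for c0 in range(0, len(img[0]), block):
            out.extend(blobs_in_block(img, r0, c0, block))
    return out

demo = [
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
blobs = blobs_blockwise(demo, block=4)
```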
Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations
International Nuclear Information System (INIS)
Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A
2009-01-01
The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.
Time Domain View of Liquid-like Screening and Large Polaron Formation in Lead Halide Perovskites
Joshi, Prakriti Pradhan; Miyata, Kiyoshi; Trinh, M. Tuan; Zhu, Xiaoyang
The structural softness and dynamic disorder of lead halide perovskites contribute to their remarkable optoelectronic properties through efficient charge screening and large polaron formation. Here we provide a direct time-domain view of the liquid-like structural dynamics and polaron formation in single-crystal CH3NH3PbBr3 and CsPbBr3 using femtosecond optical Kerr effect spectroscopy in conjunction with transient reflectance spectroscopy. We investigate structural dynamics as a function of pump energy, which enables us to examine the dynamics in the absence and presence of charge carriers. In the absence of charge carriers, structural dynamics are dominated by over-damped picosecond motions of the inorganic PbBr3− sub-lattice, and these motions are strongly coupled to band-gap electronic transitions. Carrier injection from across-gap optical excitation triggers additional 0.26 ps dynamics in CH3NH3PbBr3 that can be attributed to the formation of large polarons. In comparison, large polaron formation is slower in CsPbBr3, with a time constant of 0.6 ps. We discuss how such dynamic screening protects charge carriers in lead halide perovskites. US Department of Energy, Office of Science - Basic Energy Sciences.
Directory of Open Access Journals (Sweden)
Paula M Frew
2010-09-01
Paula M Frew, Mark J Mulligan, Su-I Hou, Kayshin Chan, Carlos del Rio (Emory University School of Medicine; Emory Center for AIDS Research; The Hope Clinic of the Emory Vaccine Center; Rollins School of Public Health, Emory University; College of Public Health, University of Georgia). Objective: This study examines whether men-who-have-sex-with-men (MSM) and transgender (TG) persons' attitudes, beliefs, and risk perceptions toward human immunodeficiency virus (HIV) vaccine research have been altered as a result of the negative findings from a phase 2B HIV vaccine study. Design: We conducted a cross-sectional survey among MSM and TG persons (N = 176) recruited from community settings in Atlanta from 2007 to 2008. The first group was recruited during an active phase 2B HIV vaccine trial in which a candidate vaccine was being evaluated (the "Step Study"), and the second group was recruited after product futility was widely reported in the media. Methods: Descriptive statistics, t tests, and chi-square tests were conducted to ascertain differences between the groups, and ordinal logistic regressions examined the influences of the above-mentioned factors on a critical outcome: future HIV vaccine study participation. The ordinal regression outcomes evaluated the influences on disinclination, neutrality, and inclination toward study participation. Results: Behavioral outcomes such as future recruitment, event attendance, study promotion, and community mobilization did not reveal any differences in participants' intentions between the groups. However, we observed ...
Mean time for the development of large workloads and large queue lengths in the GI/G/1 queue
Directory of Open Access Journals (Sweden)
Charles Knessl
1996-01-01
Full Text Available We consider the GI/G/1 queue described by either the workload U(t) (unfinished work) or the number of customers N(t) in the system. We compute the mean time until U(t) first exceeds the level K, and also the mean time until N(t) reaches N0. For the M/G/1 and GI/M/1 models, we obtain exact contour integral representations for these mean first passage times. We then compute the mean times asymptotically, as K and N0→∞, by evaluating these contour integrals. For the general GI/G/1 model, we obtain asymptotic results by a singular perturbation analysis of the appropriate backward Kolmogorov equation(s). Numerical comparisons show that the asymptotic formulas are very accurate even for moderate values of K and N0.
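The exact contour-integral representations above apply to the M/G/1 and GI/M/1 cases; as a quick numerical check of such first-passage quantities, here is a minimal Monte Carlo sketch for the M/M/1 special case (the rates and the function `mean_time_to_level` are illustrative assumptions, not from the paper):

```python
import random

def mean_time_to_level(lam, mu, n0, trials=20000, seed=1):
    """Monte Carlo estimate of the mean time until the queue length N(t)
    of an M/M/1 queue (arrival rate lam < service rate mu, started empty)
    first reaches the level n0."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        t, n = 0.0, 0
        while n < n0:
            rate = lam + (mu if n > 0 else 0.0)  # total event rate
            t += random.expovariate(rate)        # holding time to next event
            # the next event is an arrival with probability lam/rate
            if random.random() < lam / rate:
                n += 1
            else:
                n -= 1
        total += t
    return total / trials
```

The estimate grows rapidly with n0, consistent with the large-K, large-N0 asymptotics studied in the paper.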
Time domain calculation of connector loads of a very large floating structure
Gu, Jiayang; Wu, Jie; Qi, Enrong; Guan, Yifeng; Yuan, Yubo
2015-06-01
Loads generated after an air crash, ship collision, and other accidents may destroy very large floating structures (VLFSs) and create additional connector loads. In this study, the combined effects of ship collision and wave loads are considered to establish motion differential equations for a multi-body VLFS. A time domain calculation method is proposed to calculate the connector load of the VLFS in waves. The Longuet-Higgins model is employed to simulate the stochastic wave load. Fluid force and hydrodynamic coefficients are obtained with DNV Sesam software. The motion differential equation is solved with the time domain method after the frequency domain hydrodynamic coefficients are converted into the memory function of the time domain motion differential equation. As a result of the combined action of wave and impact loads, high-frequency oscillation is observed in the time history curve of the connector load. At wave directions of 0° and 75°, the regularities of the time history curves of the connector loads in different directions are similar and the connector loads of C1 and C2 in the X direction are the largest. The oscillation load is observed in the connector in the Y direction at a wave direction of 75° but not at 0°. This paper presents a time domain calculation method for the connector load to serve as a reference for the future development of Chinese VLFSs.
Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme
Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha
2017-01-01
Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government’s Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However, among the poorest households, for whom the value of the transfer is still relatively large, we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets, though the program itself does not improve access to credit. PMID:28260842
Very Large Inflammatory Odontogenic Cyst with Origin on a Single Long Time Traumatized Lower Incisor
Freitas, Filipe; Andre, Saudade; Moreira, Andre; Carames, Joao
2015-01-01
One of the consequences of traumatic injuries is the chance that aseptic pulp necrosis will occur, which in time may become infected and give origin to periapical pathosis. Although apical granulomas and cysts are a common condition, their appearance as an extremely large radiolucent image is a rare finding. Differential diagnosis with other radiographically similar pathologies, such as keratocystic odontogenic tumour or unicystic ameloblastoma, is mandatory. The purpose of this paper is to report a very large radicular cyst caused by a single mandibular incisor traumatized long before, in a 60-year-old male. Medical and clinical histories were obtained, radiographic and cone beam CT examinations were performed, and an initial incisional biopsy was done. The final decision was to perform a surgical enucleation of the lesion, 51.4 mm in length. The enucleated tissue biopsy analysis was able to render the diagnosis as an inflammatory odontogenic cyst. A 2-year follow-up showed complete bone recovery. PMID:26393219
Elliott, Mark A; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Large deviation estimates for exceedance times of perpetuity sequences and their dual processes
DEFF Research Database (Denmark)
Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa
2016-01-01
In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well-known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending ...-time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.
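The Kesten-Goldie power-law tail ${\mathbb P}\{M > u\} \sim {\cal C}_M u^{-\xi}$ can be illustrated numerically. A minimal Monte Carlo sketch under hypothetical parameter choices (constant $B_i = 1$ and $\log A_i \sim N(-1, 1)$, which satisfies Kesten's condition $E[A^\xi] = 1$ at $\xi = 2$); since $B_i > 0$ here, $Y_n$ is nondecreasing and $\sup_n Y_n = \lim_n Y_n$:

```python
import math
import random

def perpetuity_tail(u, n_terms=200, trials=10000, seed=7):
    """Monte Carlo estimate of P(M > u), where M = sup_n Y_n and
    Y_n = B_1 + A_1 B_2 + ... + (A_1...A_{n-1}) B_n.
    Hypothetical choice: B_i = 1 and log A_i ~ Normal(-1, 1), so that
    E[A^xi] = 1 with xi = 2 and the tail decays roughly like u^(-2)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        y, prod = 0.0, 1.0  # partial sum Y_n and product A_1...A_{n-1}
        for _ in range(n_terms):
            y += prod       # add the next term (A_1...A_{n-1}) * B_n
            if y > u:       # Y_n is nondecreasing, so a crossing is final
                hits += 1
                break
            prod *= math.exp(random.gauss(-1.0, 1.0))
    return hits / trials
```

The estimated exceedance probability shrinks as the level u grows, in line with the $u^{-\xi}$ tail.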
Solution of large nonlinear time-dependent problems using reduced coordinates
International Nuclear Information System (INIS)
Mish, K.D.
1987-01-01
This research is concerned with the idea of reducing a large time-dependent problem, such as one obtained from a finite-element discretization, down to a more manageable size while preserving the most important physical behavior of the solution. This reduction process is motivated by the concept of a projection operator on a Hilbert space, and leads to the Lanczos Algorithm for generation of approximate eigenvectors of a large symmetric matrix. The Lanczos Algorithm is then used to develop a reduced form of the spatial component of a time-dependent problem. The solution of the remaining temporal part of the problem is considered from the standpoint of numerical-integration schemes in the time domain. All of these theoretical results are combined to motivate the proposed reduced coordinate algorithm. This algorithm is then developed, discussed, and compared to related methods from the mechanics literature. The proposed reduced coordinate method is then applied to the solution of some representative problems in mechanics. The results of these problems are discussed, conclusions are drawn, and suggestions are made for related future research.
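The spatial reduction via the Lanczos Algorithm described above can be sketched in a few lines. This is a generic, minimal NumPy illustration without reorthogonalization (so reliable only for small k), not the dissertation's implementation:

```python
import numpy as np

def lanczos(A, v0, k):
    """Plain Lanczos iteration: build an orthonormal basis Q of the Krylov
    space span{v0, A v0, ..., A^(k-1) v0} and the tridiagonal projection
    T = Q^T A Q for a symmetric matrix A (no reorthogonalization)."""
    n = len(v0)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(k):
        w = A @ Q[:, j] - b * q_prev       # three-term recurrence
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev = Q[:, j]
            Q[:, j + 1] = w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T
```

A reduced time-dependent problem then integrates the small k x k system defined by T in place of the full n x n system, mapping back with Q.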
Directory of Open Access Journals (Sweden)
Gaëlle Aeby
2014-06-01
Full Text Available Divorce and remarriage usually imply a redefinition of family boundaries, with consequences for the production and availability of social capital. This research shows that bonding and bridging social capitals are differentially made available by families. It first hypothesizes that bridging social capital is more likely to be developed in stepfamilies, and bonding social capital in first-time families. Second, the boundaries of family configurations are expected to vary within stepfamilies and within first-time families creating a diversity of family configurations within both structures. Third, in both cases, social capital is expected to depend on the ways in which their family boundaries are set up by individuals by including or excluding ex-partners, new partner's children, siblings, and other family ties. The study is based on a sample of 300 female respondents who have at least one child of their own between 5 and 13 years, 150 from a stepfamily structure and 150 from a first-time family structure. Social capital is empirically operationalized as perceived emotional support in family networks. The results show that individuals in first-time families more often develop bonding social capital and individuals in stepfamilies bridging social capital. In both cases, however, individuals in family configurations based on close blood and conjugal ties more frequently develop bonding social capital, whereas individuals in family configurations based on in-law, stepfamily or friendship ties are more likely to develop bridging social capital.
Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.
2010-12-01
Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground
Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S; Chakrabarti, A
2017-01-01
Standardization of Aspergillus polymerase chain reaction (PCR) poses two technical challenges: (a) standardization of DNA extraction, (b) optimization of PCR against various medically important Aspergillus species. Many cases of aspergillosis go undiagnosed because of the relative insensitivity of conventional diagnostic methods such as microscopy, culture or antigen detection. The present study is an attempt to standardize a real-time PCR assay for rapid, sensitive and specific detection of Aspergillus DNA in EDTA whole blood. Three nucleic acid extraction protocols were compared and a two-step real-time PCR assay was developed and validated following the recommendations of the European Aspergillus PCR Initiative in our setup. In the first PCR step (pan-Aspergillus PCR), the target was the 28S rDNA gene, whereas in the second step (species-specific PCR) the targets were the beta-tubulin gene (for Aspergillus fumigatus, Aspergillus flavus, Aspergillus terreus) and the calmodulin gene (for Aspergillus niger). Species-specific identification of four medically important Aspergillus species, namely, A. fumigatus, A. flavus, A. niger and A. terreus, was achieved by this PCR. Specificity of the PCR was tested against 34 different DNA sources including bacteria, viruses, yeasts, other Aspergillus spp., other fungal species and human DNA, and had no false-positive reactions. The analytical sensitivity of the PCR was found to be 10² CFU/ml. The present protocol of two-step real-time PCR assays for genus- and species-specific identification of commonly isolated species in whole blood for diagnosis of invasive Aspergillus infections offers a rapid, sensitive and specific assay option and requires clinical validation at multiple centers.
Large Time Behavior for Weak Solutions of the 3D Globally Modified Navier-Stokes Equations
Directory of Open Access Journals (Sweden)
Junbai Ren
2014-01-01
Full Text Available This paper is concerned with the large time behavior of the weak solutions for three-dimensional globally modified Navier-Stokes equations. With the aid of energy methods and auxiliary decay estimates together with $L^p$-$L^q$ estimates of the heat semigroup, we derive the optimal upper and lower decay estimates of the weak solutions for the globally modified Navier-Stokes equations as $C_1(1+t)^{-3/4} \le \|u\|_{L^2} \le C_2(1+t)^{-3/4}$, $t>1$. The decay rate is optimal since it coincides with that of the heat equation.
Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases
Directory of Open Access Journals (Sweden)
Xin Liu
2018-01-01
Full Text Available Mining outlier data guarantees access security and data scheduling of parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data and suffer reduced accuracy, efficiency, and stability. This paper proposes a new method for mining outlier data, which analyzes real-time data features, obtains magnitude spectra models of outlier data, establishes a decision-tree information chain transmission model for outlier data in the mobile Internet, obtains the information flow of internal outlier data in the information chain of a large real-time database, and clusters the data. From the local characteristic time scale parameters of the information flow, the phase position features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire signals for analysis and instant amplitude and to achieve the phase-frequency characteristics of the outlier data. Wavelet-transform threshold denoising is combined with signal denoising to analyze data offset, to correct the detection filter model, and to realize outlier data mining. Simulations suggest that the method detects the characteristic outlier-data feature response distribution, reduces response time, iteration frequency, and mining error rate, improves mining adaptation and coverage, and shows good mining outcomes.
TORCH: A Large-Area Detector for Precision Time-of-Flight Measurements at LHCb
Harnew, N
2012-01-01
The TORCH (Time Of internally Reflected CHerenkov light) is an innovative high-precision time-of-flight detector which is suitable for large areas, up to tens of square metres, and is being developed for the upgraded LHCb experiment. The TORCH provides a time-of-flight measurement from the imaging of photons emitted in a 1 cm thick quartz radiator, based on the Cherenkov principle. The photons propagate by total internal reflection to the edge of the quartz plane and are then focused onto an array of Micro-Channel Plate (MCP) photon detectors at the periphery of the detector. The goal is to achieve a timing resolution of 15 ps per particle over a flight distance of 10 m. This will allow particle identification in the challenging momentum region up to 20 GeV/c. Commercial MCPs have been tested in the laboratory and demonstrate the required timing precision. An electronics readout system based on the NINO and HPTDC chipset is being developed to evaluate an 8×8 channel TORCH prototype. The simulated performance...
Event processing time prediction at the CMS experiment of the Large Hadron Collider
International Nuclear Information System (INIS)
Cury, Samir; Gutsche, Oliver; Kcira, Dorian
2014-01-01
The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, the reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were done in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split the workflow into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
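The time-aware splitting idea can be sketched as a simple greedy grouping of events by estimated CPU time rather than by a fixed event count. The function name and the per-job CPU budget `target` are hypothetical illustrations, not part of the CMS tooling:

```python
def split_by_time(est_times, target):
    """Group consecutive events (indices into est_times) into jobs whose
    total estimated CPU time stays close to `target`, instead of using a
    fixed number of events per job."""
    jobs, cur, cur_t = [], [], 0.0
    for i, t in enumerate(est_times):
        # close the current job if adding this event would overshoot
        if cur and cur_t + t > target:
            jobs.append(cur)
            cur, cur_t = [], 0.0
        cur.append(i)
        cur_t += t
    if cur:
        jobs.append(cur)
    return jobs
```

With heterogeneous per-event estimates, this yields jobs of roughly equal total CPU time, narrowing the job-length distribution the abstract refers to.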
Microsoft Office professional 2010 step by step
Cox, Joyce; Frye, Curtis
2011-01-01
Teach yourself exactly what you need to know about using Office Professional 2010, one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: Create attractive documents, publications, and spreadsheets; Manage your e-mail, calendar, meetings, and communications; Put your business data to work; Develop and deliver great presentations; Organize your ideas and notes in one place; Connect, share, and accom
Real-time graphic display system for ROSA-V Large Scale Test Facility
International Nuclear Information System (INIS)
Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.
1993-11-01
A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)
Real-world-time simulation of memory consolidation in a large-scale cerebellar model
Directory of Open Access Journals (Sweden)
Masato eGosui
2016-03-01
Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in the real-world time, with temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems
Directory of Open Access Journals (Sweden)
Ju-min Zhao
2017-01-01
Full Text Available Radio Frequency Identification (RFID) is an emerging technology for electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. But in their current form RFID systems are susceptible to cloning attacks that seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and first propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with required accuracy.
Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model
Directory of Open Access Journals (Sweden)
Xin Wang
2012-01-01
Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.
Ghosh, Soumen; Andersen, Amity; Gagliardi, Laura; Cramer, Christopher J; Govind, Niranjan
2017-09-12
We present an implementation of a time-dependent semiempirical method (INDO/S) in NWChem using real-time (RT) propagation to address, in principle, the entire spectrum of valence electronic excitations. Adopting this model, we study the UV/vis spectra of medium-sized systems such as P3B2 and f-coronene, and in addition much larger systems such as ubiquitin in the gas phase and the betanin chromophore in the presence of two explicit solvents (water and methanol). RT-INDO/S provides qualitatively and often quantitatively accurate results when compared with RT-TDDFT or experimental spectra. Even though we only consider the INDO/S Hamiltonian in this work, our implementation provides a framework for performing electron dynamics in large systems using semiempirical Hartree-Fock Hamiltonians in general.
Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P
2017-08-01
Patients' willingness to travel farther distances for certain imaging services may reflect their perceptions of the degree of differentiation of such services. We compare patients' travel times for a range of imaging examinations performed across a large academic health system. We searched the NYU Langone Medical Center Enterprise Data Warehouse to identify 442,990 adult outpatient imaging examinations performed over a recent 3.5-year period. Geocoding software was used to estimate typical driving times from patients' residences to imaging facilities. Variation in travel times was assessed among examination types. The mean expected travel time was 29.2 ± 20.6 minutes, but this varied significantly among examination types. Travel times were shortest for ultrasound (26.8 ± 18.9) and longest for positron emission tomography-computed tomography (31.9 ± 21.5). For magnetic resonance imaging, travel times were shortest for musculoskeletal extremity (26.4 ± 19.2) and spine (28.6 ± 21.0) examinations and longest for prostate (35.9 ± 25.6) and breast (32.4 ± 22.3) examinations. For computed tomography, travel times were shortest for a range of screening examinations [colonography (25.5 ± 20.8), coronary artery calcium scoring (26.1 ± 19.2), and lung cancer screening (26.4 ± 14.9)] and longest for angiography (32.0 ± 22.6). For ultrasound, travel times were shortest for aortic aneurysm screening (22.3 ± 18.4) and longest for breast (30.1 ± 19.2) examinations. Overall, men (29.9 ± 21.6) had longer travel times than women (27.8 ± 20.3); this difference persisted for each modality individually (p ≤ 0.006). Patients' willingness to travel longer times for certain imaging examination types (particularly breast and prostate imaging) supports the role of specialized services in combating potential commoditization of imaging services. Disparities in travel times by gender warrant further investigation.
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
Direct Analysis in Real Time Mass Spectrometry for Characterization of Large Saccharides.
Ma, Huiying; Jiang, Qing; Dai, Diya; Li, Hongli; Bi, Wentao; Da Yong Chen, David
2018-03-06
Polysaccharide characterization poses the most difficult challenge to available analytical technologies compared to other types of biomolecules. Plant polysaccharides are reported to have numerous medicinal values, but their effects can differ with the type of plant, and even with the region of production and the conditions of cultivation. However, the molecular basis of the differences among these polysaccharides is largely unknown. In this study, direct analysis in real time mass spectrometry (DART-MS) was used to generate polysaccharide fingerprints. Large saccharides break down into characteristic small fragments in the DART source via pyrolysis, and the products are then detected by high resolution MS. Temperature was shown to be a crucial parameter for the decomposition of large polysaccharides. The general behavior of carbohydrates in DART-MS was also studied through the investigation of a number of mono- and oligosaccharide standards. The chemical formulas and putative ionic forms of the fragments were proposed based on accurate mass, with less than 10 ppm mass error. Multivariate data analysis shows clear differentiation of the plant species. Intensities of marker ions compared among samples also showed obvious differences. The combination of DART-MS analysis and the mechanochemical extraction method used in this work demonstrates a simple, fast, and high-throughput analytical protocol for the efficient evaluation of molecular features in plant polysaccharides.
International Nuclear Information System (INIS)
Hayden, C.C.; Chandler, D.W.
1995-01-01
Results are presented from femtosecond time-resolved coherent Raman experiments in which we excite and monitor vibrational coherence in gas-phase samples of benzene and 1,3,5-hexatriene. Different physical mechanisms for coherence decay are seen in these two molecules. In benzene, where the Raman polarizability is largely isotropic, the Q branch of the vibrational Raman spectrum is the primary feature excited. Molecules in different rotational states have different Q-branch transition frequencies due to vibration--rotation interaction. Thus, the macroscopic polarization that is observed in these experiments decays because it has many frequency components from molecules in different rotational states, and these frequency components go out of phase with each other. In 1,3,5-hexatriene, the Raman excitation produces molecules in a coherent superposition of rotational states, through (O, P, R, and S branch) transitions that are strong due to the large anisotropy of the Raman polarizability. The coherent superposition of rotational states corresponds to initially spatially oriented, vibrationally excited, molecules that are freely rotating. The rotation of molecules away from the initial orientation is primarily responsible for the coherence decay in this case. These experiments produce large (∼10% efficiency) Raman shifted signals with modest excitation pulse energies (10 μJ) demonstrating the feasibility of this approach for a variety of gas phase studies. copyright 1995 American Institute of Physics
Time to "go large" on biofilm research: advantages of an omics approach.
Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J
2009-04-01
In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.
Rouwet, Dmitri
2016-04-01
Tracking variations in the chemical composition, water temperature and pH of brines from peak-activity crater lakes is the most obvious way to forecast phreatic activity. Volcano monitoring intrinsically implies a time window of observation that should be synchronised with the kinetics of magmatic processes, such as degassing and magma intrusion. Deciphering "how much time ago" a variation in degassing regime actually occurred before eventually being detected in a crater lake is key, and depends on the lake water residence time. The above reasoning assumes that gas is preserved as anions in the lake water (SO4, Cl, F anions); in other words, that scrubbing of acid gases is complete and irreversible. This is not entirely true. Recent work has confirmed, by direct MultiGas measurement of evaporative plumes, that even the strongest acid in liquid medium (i.e. SO2) degasses from hyper-acidic crater lakes. The weaker acid HCl has long been recognised as behaving as a volatile rather than a hydrophile in extremely acidic solutions (pH near 0), through a long-term steady increase in SO4/Cl ratios in the vigorously evaporating crater lake of Poás volcano. We now know that acidic gases flush through hyper-acidic crater lake brines, but we do not know to what extent (completely or partially?), or at what speed. The chemical composition hence only reflects a transient phase of the gas flushing through the lake. In terms of volcanic surveillance this brings the advantage that the monitoring time window is definitely shorter than that defined by the water chemistry, but we do not know how much shorter. Empirical experiments by Capaccioni et al. (in press) have tried to tackle this kinetic problem for HCl degassing from a "lab-lake" on the short term (2 days). With this state of the art in mind, two new monitoring strategies can be proposed to seek precursory signals of phreatic eruptions from crater lakes: (1) Tracking variations in gas compositions, fluxes and ratios between species in
Getty, Stephanie A.; Brinckerhoff, William B.; Cornish, Timothy; Li, Xiang; Floyd, Melissa; Arevalo, Ricardo Jr.; Cook, Jamie Elsila; Callahan, Michael P.
2013-01-01
Laser desorption/ionization time-of-flight mass spectrometry (LD-TOF-MS) holds promise to be a low-mass, compact in situ analytical capability for future landed missions to planetary surfaces. The ability to analyze a solid sample for both mineralogical and preserved organic content with laser ionization could be compelling as part of a scientific mission payload that must be prepared for unanticipated discoveries. Targeted missions for this instrument capability include Mars, Europa, Enceladus, and small icy bodies, such as asteroids and comets.
A study of residence time distribution using radiotracer technique in the large scale plant facility
Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.
2017-06-01
As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One good potential application of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using a radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
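Numerically, the RTD analysis described above reduces to normalizing a detector's tracer response curve and taking its first moment. A minimal sketch with a synthetic response curve (the tank geometry, tracer injection, and detector details of the actual experiment are not modeled here):

```python
import numpy as np

# Synthetic detector response C(t); a real measurement would be the
# background- and decay-corrected count rate from one NaI detector.
t = np.linspace(0.0, 60.0, 601)       # time, minutes
dt = t[1] - t[0]
c = t * np.exp(-t / 10.0)             # made-up tracer concentration curve

# E(t): residence time distribution, normalized so it integrates to 1
e = c / (c.sum() * dt)

# Mean residence time = first moment of E(t)
mrt = (t * e).sum() * dt
```

For this synthetic curve the untruncated MRT would be exactly 20 minutes; the finite 60-minute window shaves a little off the tail, giving roughly 19.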
Research on resistance characteristics of YBCO tape under short-time DC large current impact
Zhang, Zhifeng; Yang, Jiabin; Qiu, Qingquan; Zhang, Guomin; Lin, Liangzhen
2017-06-01
Research on the resistance characteristics of YBCO tape under short-time DC large current impact is the foundation for developing a DC superconducting fault current limiter (SFCL) for voltage source converter-based high voltage direct current systems (VSC-HVDC), one of the valid approaches to the problems of renewable energy integration. An SFCL can limit DC short-circuit currents and enhance the interrupting capability of DC circuit breakers. In this paper, under short-time DC large current impacts, the resistance characteristics of naked YBCO tape are studied to find the resistance-temperature behaviour and the maximum impact current. The influence of insulation on the resistance-temperature characteristics of YBCO tape is studied by comparison tests of naked and insulated tape at 77 K. The influence of operating temperature on the tape is also studied under subcooled liquid nitrogen conditions. To assess the current-impact security of YBCO tape, the critical current degradation and peak temperature are analyzed and used as judgment standards. The test results are helpful for developing SFCLs for VSC-HVDC.
International Nuclear Information System (INIS)
Busolo, F.; Conventi, L.; Grigolon, M.; Palu, G.
1991-01-01
Kinetics of [3H]-uridine uptake by murine peritoneal macrophages (pM phi) is altered early after exposure to a variety of stimuli. Alterations caused by Candida albicans, lipopolysaccharide (LPS) and recombinant interferon-gamma (rIFN-gamma) were similar in SAVO, C57BL/6, C3H/HeN and C3H/HeJ mice, and were not correlated with an activation process as shown by the amount of tumor necrosis factor-alpha (TNF-alpha) being released. Short-time exposure to all stimuli resulted in an increased nucleoside uptake by SAVO pM phi, suggesting that the tumoricidal function of this cell depends either on the type of stimulus or on the time when the specific interaction with the cell receptor takes place. Experiments with priming and triggering signals confirmed the above findings, indicating that the increase or decrease of nucleoside uptake into the cell depends essentially on the chemical nature of the priming stimulus. The triggering stimulus, on the other hand, is only able to amplify the primary response.
Piloted simulator study of allowable time delays in large-airplane response
Grantham, William D.; Meyer, Robert T.; Tingas, Stephen A.
1987-01-01
A piloted simulation was performed to determine the permissible time delay and phase shift in the flight control system of a specific large transport-type airplane. The study was conducted with a six-degree-of-freedom ground-based simulator and a math model similar to an advanced wide-body jet transport. Time delays in discrete and lagged form were incorporated into the longitudinal, lateral, and directional control systems of the airplane. Three experienced pilots flew simulated approaches and landings with random localizer and glide slope offsets during instrument tracking as their principal evaluation task. Results of the present study suggest a level 1 (satisfactory) handling qualities limit for the effective time delay of 0.15 sec in both the pitch and roll axes, as opposed to the 0.10-sec limit of the present specification (MIL-F-8785C) for both axes. Also, the present results suggest a level 2 (acceptable but unsatisfactory) handling qualities limit for an effective time delay of 0.82 sec and 0.57 sec for the pitch and roll axes, respectively, as opposed to 0.20 sec of the present specifications for both axes. In the area of phase shift between cockpit input and control surface deflection, the results of this study, flown in turbulent air, suggest less severe phase shift limitations for the approach and landing task: approximately 50 deg. in pitch and 40 deg. in roll, as opposed to 15 deg. of the present specifications for both axes.
Ogawa, K.; Isobe, M.; Nishitani, T.; Murakami, S.; Seki, R.; Nakata, M.; Takada, E.; Kawase, H.; Pu, N.; LHD Experiment Group
2018-03-01
Time-resolved measurement of triton burnup is performed with a scintillating fiber detector system in the deuterium operation of the large helical device. The system is composed of a detector head consisting of 109 scintillating fibers, each 1 mm in diameter and 100 mm in length, embedded in an aluminum substrate; a magnetic-field-resistant photomultiplier tube; and a data acquisition system equipped with a 1 GHz sampling-rate analog-to-digital converter and a field programmable gate array. A discrimination level of 150 mV was set to extract the pulse signals induced by 14 MeV neutrons, according to the pulse height spectra obtained in the experiment. The decay time of the 14 MeV neutron emission rate after the neutral beam is turned off is measured by the scintillating fiber detector. This decay time is consistent with the decay time of the total neutron emission rate corresponding to the 14 MeV neutrons measured by the neutron flux monitor, as expected. The diffusion coefficient is evaluated using a simple classical slowing-down model, the FBURN code, and the triton diffusion coefficient is found to be less than 0.2 m2 s-1.
Rapid Large Earthquake and Run-up Characterization in Quasi Real Time
Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.
2017-12-01
Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule Earthquake, the 2014 Mw 8.2 Iquique Earthquake, the 2015 Mw 8.3 Illapel Earthquake and the Mw 7.6 Melinka Earthquake. We obtain many solutions as time elapses; for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as run-up observations in the field.
Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?
Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.
2018-02-01
The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and
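The nowcasting step above quantifies the interevent-count distribution by the shape exponent β of a best-fitting Weibull distribution, with β = 1 for a random (exponential) sequence. A hedged sketch of one common way to estimate β, a least-squares fit on the linearized Weibull plot ln(-ln(1 - F)) = β ln x - β ln λ; the paper does not specify its fitting procedure, and the counts below are synthetic:

```python
import numpy as np

def weibull_beta(samples):
    """Estimate the Weibull shape exponent from a sample via a Weibull-plot fit."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    f = (np.arange(1, n + 1) - 0.5) / n      # empirical CDF (median ranks)
    keep = x > 0                             # log requires positive counts
    lx = np.log(x[keep])
    ly = np.log(-np.log(1.0 - f[keep]))
    beta, _ = np.polyfit(lx, ly, 1)          # slope of the Weibull plot = beta
    return beta

# A random (exponential) sample should recover beta ≈ 1, the paper's null case.
rng = np.random.default_rng(42)
counts = rng.exponential(scale=100.0, size=5000)  # synthetic interevent counts
beta = weibull_beta(counts)
```

Applied to real interevent counts, β < 1 would indicate temporal clustering relative to the random case, as the paper reports for M_λ ≥ 7.0 events.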
International Nuclear Information System (INIS)
Imam-Dahroni; Dwi-Herwidhi; NS, Kasilani
2000-01-01
Research on the synthesis of matrix graphite at the baking-process step was conducted, focusing on the influence of the time and velocity variables of the inert gas. Baking times ranging from 5 minutes to 55 minutes and inert gas velocities from 0.30 l/minute to 3.60 l/minute were investigated, yielding matrices of different quality. Optimizing the operation time and the argon gas flow rate indicated that a baking time of 30 minutes and an argon flow rate of 2.60 l/minute produced the best matrix graphite, with a hardness of 11 kg/mm2 and a ductility of 1800 Newton. (author)
Ficanha, Evandro M; Ribeiro, Guilherme A; Knop, Lauren; Rastgaar, Mo
2017-07-01
This paper describes the methods and experiment protocols for estimating human ankle impedance during turning and straight-line walking. The ankle impedance of two human subjects during the stance phase of walking was estimated in both dorsiflexion-plantarflexion (DP) and inversion-eversion (IE). The impedance was estimated about 8 axes of rotation of the human ankle, combining different amounts of DP and IE rotation and differentiating between positive and negative rotations, at 5 instants of the stance length (SL): specifically, at 10%, 30%, 50%, 70% and 90% of the SL. The ankle impedance showed great variability across time and across the axes of rotation, with consistently larger stiffness and damping in DP than in IE. When comparing straight walking and turning, the main differences were in damping at 50%, 70%, and 90% of the SL, with an increase in damping about all axes of rotation during turning.
Hypoattenuation on CTA images with large vessel occlusion: timing affects conspicuity
Energy Technology Data Exchange (ETDEWEB)
Dave, Prasham [University of Ottawa, MD Program, Faculty of Medicine, Ottawa, ON (Canada); Lum, Cheemun; Thornhill, Rebecca; Chakraborty, Santanu [University of Ottawa, Department of Radiology, Ottawa, ON (Canada); Ottawa Hospital Research Institute, Ottawa, ON (Canada); Dowlatshahi, Dar [Ottawa Hospital Research Institute, Ottawa, ON (Canada); University of Ottawa, Division of Neurology, Department of Medicine, Ottawa, ON (Canada)
2017-05-15
Parenchymal hypoattenuation distal to occlusions on CTA source images (CTASI) is perceived because of the differences in tissue contrast compared to normally perfused tissue. This difference in conspicuity can be measured objectively. We evaluated the effect of contrast timing on the conspicuity of ischemic areas. We collected consecutive patients, retrospectively, between 2012 and 2014 with large vessel occlusions that had dynamic multiphase CT angiography (CTA) and CT perfusion (CTP). We identified areas of low cerebral blood volume on CTP maps and drew the region of interest (ROI) on the corresponding CTASI. A second ROI was placed in an area of normally perfused tissue. We evaluated conspicuity by comparing the absolute and relative change in attenuation between ischemic and normally perfused tissue over seven time points. The median absolute and relative conspicuity was greatest at the peak arterial (8.6 HU (IQR 5.1-13.9); 1.15 (1.09-1.26)), notch (9.4 HU (5.8-14.9); 1.17 (1.10-1.27)), and peak venous phases (7.0 HU (3.1-12.7); 1.13 (1.05-1.23)) compared to other portions of the time-attenuation curve (TAC). There was a significant effect of phase on the TAC for the conspicuity of ischemic vs normally perfused areas (P < 0.00001). The conspicuity of ischemic areas distal to a large artery occlusion in acute stroke is dependent on the phase of contrast arrival with dynamic CTASI and is objectively greatest in the mid-phase of the TAC. (orig.)
Backward-in-time methods to simulate large-scale transport and mixing in the ocean
Prants, S. V.
2015-06-01
In oceanography and meteorology, it is important to know not only where water or air masses are headed, but also where they came from. For example, it is important to find unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible, with conventional ocean and atmospheric numerical circulation models, to extrapolate backward from an observed plume to find its source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters in a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field that is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one gets the position of each tracer on a fixed day in the past and can identify, from known destinations, particle positions at earlier times. The results show that the method is efficient, for example, in estimating probabilities of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field at risk of being contaminated by Fukushima-derived radionuclides. The results of simulations are compared with in situ measurements of caesium concentration in sea water samples collected in a recent research vessel cruise in the area to the east of Japan. Backward-in-time latitudinal maps and the corresponding
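The core of the backward-in-time method is simply integrating the advection equation dx/dt = v(x, t) with a negative time step from the observation point. A self-contained sketch using an analytic solid-body-rotation field as a stand-in for the altimetric or model velocities the review works with:

```python
import numpy as np

def velocity(x, t, omega=0.1):
    # Analytic stand-in field: solid-body rotation about the origin.
    return np.array([-omega * x[1], omega * x[0]])

def advect(x0, t0, t1, n_steps=1000):
    """Integrate dx/dt = v(x, t) from t0 to t1 with RK4.

    Passing t1 < t0 makes the step negative, i.e. backward-in-time advection.
    """
    x = np.array(x0, dtype=float)
    h = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# A tracer observed at (1, 0) at t = 30: where did it come from at t = 0?
origin = advect([1.0, 0.0], t0=30.0, t1=0.0)
# Forward-advecting the recovered origin should return us to (1, 0).
check = advect(origin, t0=0.0, t1=30.0)
```

In practice the velocity field comes from gridded altimetry or model output and must be interpolated in space and time, and millions of tracers are launched over the area of interest rather than one.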
Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem
2012-08-07
Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions.This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist with the purpose to gain insight into whether this 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or a department, to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires) applying data triangulation. Except for the real-time monitoring, the data collection will take place at baseline and after 6, 12 and 18 months. This is one of the few
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data and good agreement has been observed for all studied cases between the results of the present method and comparable other data.
Real-Time Track Reallocation for Emergency Incidents at Large Railway Stations
Directory of Open Access Journals (Sweden)
Wei Liu
2015-01-01
After track capacity breakdowns at a railway station, train dispatchers need to generate appropriate track reallocation plans to recover the impacted train schedule and minimize the expected total train delay time under stochastic scenarios. This paper focuses on the real-time track reallocation problem when tracks break down at large railway stations. To represent these cases, virtual trains are introduced and activated to occupy the accident tracks. A mathematical programming model is developed, which aims at minimizing the total occupation time of station bottleneck sections to avoid train delays. In addition, a hybrid algorithm between the genetic algorithm and the simulated annealing algorithm is designed. The case study from the Baoji railway station in China verifies the efficiency of the proposed model and the algorithm. Numerical results indicate that, during a daily and shift transport plan from 8:00 to 8:30, if five tracks break down simultaneously, this will disturb train schedules (resulting in train arrival and departure delays).
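The paper's hybrid genetic/simulated-annealing algorithm is not reproduced here, but the simulated-annealing half of the idea can be sketched on a toy version of the problem: assign delayed trains to available tracks so as to minimize an occupation-time objective. The cost model, crowding penalty, and instance below are invented purely for illustration:

```python
import math
import random

random.seed(1)

n_trains, n_tracks = 6, 4
# occupation[i][j]: made-up bottleneck occupation time if train i uses track j
occupation = [[random.randint(2, 10) for _ in range(n_tracks)]
              for _ in range(n_trains)]

def cost(assign):
    # Total occupation time plus a penalty for crowding too many trains
    # onto one track (a crude stand-in for bottleneck-section conflicts).
    total = sum(occupation[i][assign[i]] for i in range(n_trains))
    loads = [assign.count(j) for j in range(n_tracks)]
    return total + 5 * sum(max(0, load - 2) for load in loads)

# Simulated annealing: random single-train reassignments, accepting
# worse moves with probability exp(-delta / temperature).
assign = [random.randrange(n_tracks) for _ in range(n_trains)]
best, best_cost = assign[:], cost(assign)
temp = 10.0
while temp > 0.01:
    cand = assign[:]
    cand[random.randrange(n_trains)] = random.randrange(n_tracks)
    delta = cost(cand) - cost(assign)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        assign = cand
        if cost(assign) < best_cost:
            best, best_cost = assign[:], cost(assign)
    temp *= 0.995
```

In the paper's formulation, broken-down tracks are pre-occupied by virtual trains and thus drop out of the feasible assignments; the hybrid couples annealing moves like these with a genetic algorithm's population-level search.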
Shared control on lunar spacecraft teleoperation rendezvous operations with large time delay
Ya-kun, Zhang; Hai-yang, Li; Rui-xue, Huang; Jiang-hui, Liu
2017-08-01
Teleoperation could be used in on-orbit servicing missions in space, such as object deorbiting, spacecraft approach, and as a back-up for automatic rendezvous and docking systems. Teleoperated rendezvous and docking in lunar orbit may encounter bottlenecks due to the inherent time delay in the communication link and the limited measurement accuracy of sensors. Moreover, full human intervention is unsuitable in view of the partial communication coverage problem. To solve these problems, a shared control strategy for teleoperated rendezvous and docking is detailed. The allocation of control authority during lunar orbital maneuvers involving two spacecraft in the final rendezvous and docking phase is discussed in this paper. A predictive display model based on the relative dynamic equations is established to overcome the influence of the large time delay in the communication link. We discuss, and attempt to demonstrate via consistent ground-based simulations, the relative merits of a fully autonomous control mode (i.e., onboard computer-based), fully manual control (i.e., human-driven from the ground station) and a shared control mode. The simulation experiments were conducted on a nine-degrees-of-freedom teleoperation rendezvous and docking simulation platform. Simulation results indicate that the shared control method can overcome the influence of time delay effects. In addition, the docking success probability of the shared control method was higher than that of the automatic and manual modes.
Directory of Open Access Journals (Sweden)
Vanessa Suin
2014-01-01
A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 50% tissue culture infectious dose (TCID50). The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.
Suin, Vanessa; Nazé, Florence; Francart, Aurélie; Lamoral, Sophie; De Craeye, Stéphane; Kalai, Michael; Van Gucht, Steven
2014-01-01
A generic two-step lyssavirus real-time reverse transcriptase polymerase chain reaction (qRT-PCR), based on a nested PCR strategy, was validated for the detection of different lyssavirus species. Primers with 17 to 30% degenerate bases were used in both consecutive steps. The assay could accurately detect RABV, LBV, MOKV, DUVV, EBLV-1, EBLV-2, and ABLV. In silico sequence alignment showed a functional match with the remaining lyssavirus species. The diagnostic specificity was 100% and the sensitivity proved to be superior to that of the fluorescent antigen test. The limit of detection was ≤1 50% tissue culture infectious dose (TCID50). The related vesicular stomatitis virus was not recognized, confirming the selectivity for lyssaviruses. The assay was applied to follow the evolution of rabies virus infection in the brain of mice from 0 to 10 days after intranasal inoculation. The obtained RNA curve corresponded well with the curves obtained by a one-step monospecific RABV-qRT-PCR, the fluorescent antigen test, and virus titration. Despite the presence of degenerate bases, the assay proved to be highly sensitive, specific, and reproducible.
Norlelawati, A T; Mohd Danial, G; Nora, H; Nadia, O; Zatur Rawihah, K; Nor Zamzila, A; Naznin, M
2016-04-01
Synovial sarcoma (SS) is a rare cancer and accounts for 5-10% of adult soft tissue sarcomas. Making an accurate diagnosis is difficult due to the overlapping histological features of SS with other types of sarcomas and its non-specific immunohistochemistry profile. Molecular testing is thus considered necessary to confirm the diagnosis, since more than 90% of SS cases carry the transcript of t(X;18)(p11.2;q11.2). The purpose of this study was to diagnose SS at the molecular level by testing for t(X;18) fusion-transcript expression through one-step reverse transcriptase real-time polymerase chain reaction (RT-PCR). Formalin-fixed paraffin-embedded tissue blocks of 23 cases of soft tissue sarcomas, which included 5 cases reported as SS as the primary diagnosis and 8 cases with SS as the differential diagnosis, were retrieved from the Department of Pathology, Tengku Ampuan Afzan Hospital, Kuantan, Pahang. RNA was purified from the tissue block sections and then subjected to one-step reverse transcriptase real-time PCR using sequence-specific hydrolysis probes for simultaneous detection of either the SYT-SSX1 or SYT-SSX2 fusion transcript. Of the 23 cases, 4 were found to be positive for the SYT-SSX fusion transcript; 2 of these had been diagnosed as SS, whereas in the other 2 cases SS was the differential diagnosis. Three cases were excluded due to failure of both the SYT-SSX amplification assay and the β-2-microglobulin control assay. The remaining 16 cases were negative for the fusion transcript. This study has shown that the application of one-step reverse transcriptase real-time PCR for the detection of the SYT-SSX transcript is feasible as an aid in confirming the diagnosis of synovial sarcoma.
Tsujimoto, A; Barkmeier, W W; Takamizawa, T; Latta, M A; Miyazaki, M
2016-01-01
The purpose of this study was to evaluate the effect of phosphoric acid pre-etching times on shear bond strength (SBS) and surface free energy (SFE) with single-step self-etch adhesives. The three single-step self-etch adhesives used were: 1) Scotchbond Universal Adhesive (3M ESPE), 2) Clearfil tri-S Bond (Kuraray Noritake Dental), and 3) G-Bond Plus (GC). Two no pre-etching groups, 1) untreated enamel and 2) enamel surfaces after ultrasonic cleaning with distilled water for 30 seconds to remove the smear layer, were prepared. There were four pre-etching groups: 1) enamel surfaces were pre-etched with phosphoric acid (Etchant, 3M ESPE) for 3 seconds, 2) enamel surfaces were pre-etched for 5 seconds, 3) enamel surfaces were pre-etched for 10 seconds, and 4) enamel surfaces were pre-etched for 15 seconds. Resin composite was bonded to the treated enamel surface to determine SBS. The SFEs of treated enamel surfaces were determined by measuring the contact angles of three test liquids. Scanning electron microscopy was used to examine the enamel surfaces and enamel-adhesive interface. The specimens with phosphoric acid pre-etching showed significantly higher SBS and SFEs than the specimens without phosphoric acid pre-etching regardless of the adhesive system used. SBS and SFEs did not increase for phosphoric acid pre-etching times over 3 seconds. There were no significant differences in SBS and SFEs between the specimens with and without a smear layer. The data suggest that phosphoric acid pre-etching of ground enamel improves the bonding performance of single-step self-etch adhesives, but these bonding properties do not increase for phosphoric acid pre-etching times over 3 seconds.
International Nuclear Information System (INIS)
Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki
2016-01-01
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
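The link between a Gaussian set-up/hold-time fluctuation and the circuit-level error rate can be sketched with a one-sided Gaussian tail. This is a toy model under assumed independence of gates, not the paper's statistical analysis; the margin, sigma, and gate count are illustrative.

```python
import math

def gate_error_prob(margin, sigma):
    """Probability that a zero-mean Gaussian set-up/hold-time
    fluctuation with std. dev. sigma exceeds the timing margin:
    the one-sided tail Q(margin / sigma)."""
    return 0.5 * math.erfc(margin / (sigma * math.sqrt(2.0)))

def circuit_error_rate(margin, sigma, n_gates):
    """Error rate of a circuit with n_gates independent timing-critical
    gates -- a toy illustration of why a 1-million-bit shift register
    needs a far wider margin than a single gate."""
    return 1.0 - (1.0 - gate_error_prob(margin, sigma)) ** n_gates
```

With a 5-sigma margin a single gate errs with probability ~3e-7, yet a million-gate circuit under the same margin fails roughly a quarter of the time, which motivates margin adjustment at scale.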
Energy Technology Data Exchange (ETDEWEB)
Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)
2016-11-15
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
Directory of Open Access Journals (Sweden)
Min Chen
2014-01-01
We study the one-dimensional bipolar nonisentropic Euler-Poisson equations, which can model various physical phenomena, such as the propagation of electrons and holes in submicron semiconductor devices, the propagation of positive and negative ions in plasmas, and the biological transport of ions through channel proteins. We show the existence and large time behavior of global smooth solutions for the initial value problem when the difference of the two particles' initial mass is nonzero and the far field of the two particles' initial temperatures is not the ambient device temperature. This result improves that of Y.-P. Li for the case in which the difference of the two particles' initial mass is zero and the far field of the initial temperature is the ambient device temperature.
Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation
Directory of Open Access Journals (Sweden)
Shunli Wang
2016-01-01
We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by an automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.
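The depth-based grid simplification can be illustrated with a toy rule: keep shallow (near-coast) cells at full resolution and subsample deep offshore cells, where wave detail matters less. The cutoff and stride here are assumptions for illustration, not the paper's values.

```python
def simplify_grid(depth, deep_cutoff=40.0, keep_every=4):
    """Toy depth-based simplification. depth maps (i, j) grid indices
    to water depth in metres. Shallow cells are always kept; deep
    cells are kept only on a coarse stride, mimicking the removal of
    unnecessary grid cells according to depth."""
    kept = {}
    for (i, j), d in depth.items():
        if d < deep_cutoff or (i % keep_every == 0 and j % keep_every == 0):
            kept[(i, j)] = d
    return kept
```

A real implementation would also respect the detected coastline and couple the retained deep cells to the shallow-water solver at the hybrid boundary.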
Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order
Directory of Open Access Journals (Sweden)
B. F. Uchôa-Filho
2008-06-01
We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
Duvall, Thomas L.; Hanasoge, Shravan M.
2012-01-01
With large separations (10-24 deg heliocentric), it has proven possible to cleanly separate the horizontal and vertical components of supergranular flow with time-distance helioseismology. These measurements require very broad filters in the k-ω power spectrum, as supergranulation apparently scatters waves over a large area of the power spectrum. By picking locations of supergranules as peaks in the horizontal divergence signal derived from f-mode waves, it is possible to simultaneously obtain average properties of supergranules and a high signal-to-noise ratio by averaging over many cells. By comparing ray-theory forward modeling with HMI measurements, an average supergranule model is obtained with a peak upflow of 240 m/s at cell center at a depth of 2.3 Mm and a peak horizontal outflow of 700 m/s at a depth of 1.6 Mm. This upflow is a factor of 20 larger than the measured photospheric upflow. These results may not be consistent with earlier measurements using much shorter separations (<5 deg heliocentric). With a 30 Mm horizontal extent and a depth of a few Mm, the cells might be characterized as thick pancakes.
Chen, Jingfang; Zhang, Rusheng; Ou, Xinhua; Yao, Dong; Huang, Zheng; Li, Linzhi; Sun, Biancheng
2017-06-01
A TaqMan based duplex one-step real time RT-PCR (rRT-PCR) assay was developed for the rapid detection of Coxsackievirus A10 (CV-A10) and other enterovirus (EVs) in clinical samples. The assay was fully evaluated and found to be specific and sensitive. When applied in 115 clinical samples, a 100% diagnostic sensitivity in CV-A10 detection and 97.4% diagnostic sensitivity in other EVs were found. Copyright © 2017 Elsevier Ltd. All rights reserved.
Thermal motion in proteins: Large effects on the time-averaged interaction energies
International Nuclear Information System (INIS)
Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio
2016-01-01
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies do not generally coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies are typically smoother functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased substantially by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
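The thermal smoothing effect is easy to demonstrate numerically: average the Lennard-Jones potential over Gaussian-fluctuating distances and compare with the potential at the mean distance. The epsilon, sigma, and fluctuation amplitude below are illustrative placeholders, not the paper's fitted values.

```python
import math
import random

def lj(r, eps=0.2, sigma=3.4):
    """Lennard-Jones pair potential (energy units of eps, distance in
    the units of sigma, e.g. kcal/mol and Angstrom); parameters are
    illustrative, not protein-specific."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def smoothed_lj(r_avg, r_std, n=200000, seed=1):
    """Monte Carlo estimate of the time-averaged interaction energy
    <V(r)> when the inter-atomic distance fluctuates (Gaussian, std
    r_std) about r_avg. The difference <V(r)> - V(r_avg) is the
    thermal smoothing effect discussed in the abstract."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        r = max(rng.gauss(r_avg, r_std), 0.1)  # guard against r <= 0
        total += lj(r)
    return total / n
```

At the potential minimum r = 2^(1/6) sigma, any fluctuation raises the average energy above the minimum value, so the time-averaged curve is shallower and smoother than the bare pair-potential.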
Thermal motion in proteins: Large effects on the time-averaged interaction energies
Energy Technology Data Exchange (ETDEWEB)
Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)
2016-03-15
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies do not generally coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies are typically smoother functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased substantially by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
International Nuclear Information System (INIS)
Lee, T.V.; Rothstein, D.; Madey, R.
1986-01-01
The time-dependent concentration of a radioactive gas at the outlet of an adsorber bed for a step change in the input concentration is analyzed by the method of moments. This moment analysis yields analytical expressions for calculating the kinetic parameters of a gas adsorbed on a porous solid in terms of observables from a time-dependent transmission curve. Transmission is the ratio of the adsorbate outlet concentration to that at the inlet. The three nonequilibrium parameters are the longitudinal diffusion coefficient, the solid-phase diffusion coefficient, and the interfacial mass-transfer coefficient. Three quantities that can be extracted in principle from an experimental transmission curve are the equilibrium transmission, the average residence (or propagation) time, and the first moment relative to the propagation time. The propagation time for a radioactive gas is given by the time integral of one minus the transmission (expressed as a fraction of the steady-state transmission). The steady-state transmission, the propagation time, and the first-order moment are functions of the three kinetic parameters and the equilibrium adsorption capacity. The equilibrium adsorption capacity is extracted from an experimental transmission curve for a stable gaseous isotope. The three kinetic parameters can be obtained by solving the three analytical expressions simultaneously. No empirical correlations are required.
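The propagation-time observable defined above, the time integral of one minus the normalized transmission, can be evaluated directly from sampled data. A minimal sketch using the trapezoidal rule, with made-up sample values:

```python
def propagation_time(times, transmission, t_ss):
    """Average residence (propagation) time extracted from a measured
    step-response transmission curve: the time integral of
    1 - T(t)/T_ss, evaluated by the trapezoidal rule. times and
    transmission are equal-length sample lists; t_ss is the
    steady-state transmission."""
    integrand = [1.0 - t / t_ss for t in transmission]
    area = 0.0
    for k in range(1, len(times)):
        area += 0.5 * (integrand[k - 1] + integrand[k]) * (times[k] - times[k - 1])
    return area
```

For an idealized curve that ramps from 0 to the steady-state value between t = 1 and t = 2, the integral gives a propagation time of 1.5, the midpoint of the ramp.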
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014 were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix GAPY(-1) based on a direct inversion of genomic relationship matrix on a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls, and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up GAPY(-1
realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array
Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.
2018-05-01
Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.
Suppression of the Transit-Time Instability in Large-Area Electron Beam Diodes
Myers, Matthew C.; Friedman, Moshe; Swanekamp, Stephen B.; Chan, Lop-Yung; Ludeking, Larry; Sethian, John D.
2002-12-01
Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.
Suppression of the transit-time instability in large-area electron beam diodes
International Nuclear Information System (INIS)
Myers, Matthew C.; Friedman, Moshe; Sethian, John D.; Swanekamp, Stephen B.; Chan, L.-Y.; Ludeking, Larry
2002-01-01
Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.
Practical method of calculating time-integrated concentrations at medium and large distances
International Nuclear Information System (INIS)
Cagnetti, P.; Ferrara, V.
1980-01-01
Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometers, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σ_y and σ_z are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
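Once σ_y and σ_z are known at the receptor's transfer distance, the hand calculation the report alludes to is the standard Gaussian-plume time-integrated concentration. A sketch of that standard form (ground-level release with full ground reflection); the report's own long-range σ parametrisations are not reproduced here, and all numbers are illustrative.

```python
import math

def tic_ground_level(q_total, u, sigma_y, sigma_z, y=0.0):
    """Time-integrated concentration (e.g. Bq s/m^3) at ground level
    for a brief ground-level release of q_total (Bq), mean transport
    wind speed u (m/s), and dispersion parameters sigma_y, sigma_z (m)
    evaluated at the receptor's travel distance; y is the crosswind
    offset from the plume axis. Standard Gaussian-plume expression
    with full ground reflection."""
    return (q_total / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y * y / (2.0 * sigma_y ** 2)))
```

The crosswind exponential makes off-axis receptors see a smaller TIC than the centerline, and the 1/(σ_y σ_z) prefactor captures the dilution that grows with transfer time.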
Getty, Stephanie; Brickerhoff, William; Cornish, Timothy; Ecelberger, Scott; Floyd, Melissa
2012-01-01
RATIONALE: A miniature time-of-flight mass spectrometer has been adapted to demonstrate two-step laser desorption-ionization (LDI) in a compact instrument package for enhanced organics detection. Two-step LDI decouples the desorption and ionization processes, relative to traditional one-step laser desorption-ionization, in order to produce low-fragmentation conditions for complex organic analytes. Tuning the UV ionization laser energy allowed control of the degree of fragmentation, which may enable better identification of constituent species. METHODS: A reflectron time-of-flight mass spectrometer prototype measuring 20 cm in length was adapted to a two-laser configuration, with IR (1064 nm) desorption followed by UV (266 nm) postionization. A relatively low ion extraction voltage of 5 kV was applied at the sample inlet. Instrument capabilities and performance were demonstrated with analysis of a model polycyclic aromatic hydrocarbon (PAH), representing a class of compounds important to the fields of Earth and planetary science. RESULTS: L2MS analysis of a model PAH standard, pyrene, has been demonstrated, including parent mass identification and the onset of tunable fragmentation as a function of ionizing laser energy. A mass resolution m/Δm = 380 at full width at half-maximum was achieved, which is notable for gas-phase ionization of desorbed neutrals in a highly compact mass analyzer. CONCLUSIONS: Achieving two-step laser mass spectrometry (L2MS) in a highly miniature instrument enables a powerful approach to the detection and characterization of aromatic organics in remote terrestrial and planetary applications. Tunable detection of parent and fragment ions with high mass resolution, diagnostic of molecular structure, is possible on such a compact L2MS instrument. Selectivity of L2MS against low-mass inorganic salt interferences is a key advantage when working with unprocessed, natural samples, and a mechanism for the observed selectivity is presented.
DEFF Research Database (Denmark)
Kubat, Irnis; Agger, Christian; Møller, Uffe Visbech
2014-01-01
We present numerical modeling of mid-infrared (MIR) supercontinuum generation (SCG) in dispersion-optimized chalcogenide (CHALC) step-index fibres (SIFs) with exceptionally high numerical aperture (NA) around one, pumped with mode-locked praseodymium-doped (Pr3+) chalcogenide fibre lasers. The 4...... for the highest NA considered but required pumping at 4.7kW as well as up to 3m of fibre to compensate for the lower nonlinearities. The amount of power converted into the 8-10 μm band was 7.5 and 8.8mW for the 8 and 10μm fibres, respectively. For the 20μm core fibres up to 46mW was converted....
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-01
A test procedure is proposed for identifying numerically significant solution changes in the evolution equations used in atmospheric models. The test issues a fail signal when any code modification or computing-environment change leads to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided, using the Community Atmosphere Model (CAM) version 5.3, that the proposed procedure can be used to distinguish rounding-level solution changes from the impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive, since it does not detect issues associated with diagnostic calculations that do not feed back into the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
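The pass/fail logic described above can be sketched as a comparison of short test simulations against reference ensemble members, failing when any solution difference exceeds the known time-step sensitivity. The RMSD metric, the state vectors, and the threshold are illustrative assumptions, not CAM's actual diagnostics.

```python
import math

def rmsd(a, b):
    """Root-mean-square difference between two model state vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def convergence_test(test_states, ref_states, threshold):
    """Toy version of the fail-signal rule: each short test run is
    compared against its paired reference ensemble member, and the
    test fails when any difference exceeds the reference model's
    known time-step sensitivity (threshold)."""
    fail = any(rmsd(t, r) > threshold for t, r in zip(test_states, ref_states))
    return "FAIL" if fail else "PASS"
```

Because each member comparison is independent, all short simulations can be run in parallel, which is what gives the procedure its fast turnaround.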
Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.
Directory of Open Access Journals (Sweden)
Robert M Kaplan
We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI) funded trials has increased over time. We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in clinicaltrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality. 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 among the 25 trials (8%) published after 2000 (χ² = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings. The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
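The reported χ² = 12.2 for the 2×2 table (17 of 30 positive before 2000 vs 2 of 25 after) is consistent with a continuity-corrected (Yates) chi-square; a sketch that reproduces the figure (the choice of correction is an inference, since the abstract does not state it):

```python
def yates_chi_square(a, b, c, d):
    """Continuity-corrected chi-square statistic for the 2x2 table
    [[a, b], [c, d]]. With the abstract's counts -- 17 positive /
    13 other before 2000, 2 positive / 23 other after -- this gives
    approximately 12.2 with df = 1."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for obs, r, s in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[r] * cols[s] / n
        chi2 += (abs(obs - expected) - 0.5) ** 2 / expected
    return chi2
```

The uncorrected Pearson statistic for the same table is noticeably larger (about 14.3), which is why the correction matters when matching the published value.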
International Nuclear Information System (INIS)
Mirzaee, Hossein
2009-01-01
The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each with ten neurons, in order to carefully map the structure of chaotic time series such as the Mackey-Glass time series. First the MLP network is trained with 1000 data points, and then it is tested with the next 500. The trained and tested network is then applied to long-term prediction of the 120 data points that follow the test data. The prediction proceeds recursively: the first network input is formed from the last four values of the test data; each predicted value is then shifted into the regression vector that serves as the network input, so that after the first four prediction steps the input regression vector consists entirely of predicted values, with each new prediction shifted into the input vector for the subsequent prediction.
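The closed-loop prediction scheme described here — shifting each prediction back into the regression vector — can be sketched independently of the network itself. Here `predict_one` stands for any trained one-step predictor (e.g. the MLP); the function name and signature are illustrative assumptions, not the paper's code.

```python
import numpy as np

def recursive_forecast(predict_one, history, n_steps, order=4):
    """Closed-loop multi-step forecasting: after the first `order` steps,
    the regression vector consists entirely of the model's own predictions.

    predict_one : maps a length-`order` regression vector to the next value
    history     : observed series; its last `order` samples seed the loop
    """
    window = list(history[-order:])
    out = []
    for _ in range(n_steps):
        y = predict_one(np.asarray(window))
        out.append(y)
        window = window[1:] + [y]   # shift the prediction into the input
    return np.asarray(out)
```

With a toy predictor that sums its window, `recursive_forecast(lambda w: float(w.sum()), [0., 0., 0., 1.], 4)` returns `[1.0, 2.0, 4.0, 8.0]`, showing how each output is fed back as an input.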
Directory of Open Access Journals (Sweden)
Shanming Wang
2015-01-01
Electric machines now integrate with power electronics to form inseparable systems in many applications requiring high performance. For such systems, two kinds of nonlinearities coexist: the magnetic nonlinearity of the iron core and the circuit nonlinearity caused by power electronics devices, which together make simulation time-consuming. In this paper, the multiloop model combined with an FE model of AC-DC synchronous generators, as one example of an electric machine with a power electronics system, is set up. The FE method is applied to the magnetic nonlinearity and a variable-step, variable-topology simulation method is applied to the circuit nonlinearity. In order to improve the simulation speed, the incomplete Cholesky conjugate gradient (ICCG) method is used to solve the state equation. However, when a power electronics device switches off, convergence difficulties occur, so a straightforward approach to achieve convergence of the simulation is proposed. Finally, the simulation results are compared with experiments.
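The ICCG step is preconditioned conjugate gradient applied to the sparse state equation. A self-contained sketch is below; to stay dependency-free it uses a Jacobi (diagonal) preconditioner standing in for the incomplete Cholesky factors — the iteration itself is unchanged, only the preconditioner differs.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for an SPD system A x = b.
    M_inv_diag holds the inverse of the (diagonal) preconditioner; an
    incomplete Cholesky preconditioner would replace the z = M^{-1} r
    steps with triangular solves, leaving the rest identical."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # initial residual
    z = M_inv_diag * r             # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x
```

For the stiff, frequently re-topologized circuits described above, the appeal of such Krylov solvers is that each switching event only changes the sparse matrix, not the solver.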
Energy Technology Data Exchange (ETDEWEB)
Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)
1997-12-31
Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
Replicability of time-varying connectivity patterns in large resting state fMRI samples.
Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D
2017-12-01
The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent, age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear, and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of the reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance.
Data transfer over the wide area network with a large round trip time
Matsunaga, H.; Isobe, T.; Mashimo, T.; Sakamoto, H.; Ueda, I.
2010-04-01
A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290ms. It is not easy to exploit the available bandwidth for such a link, so-called long fat network. We performed data transfer tests by using GridFTP in various combinations of the parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience of the actual data transfer in our production system where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.
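The tuning problem the authors face is set by the bandwidth-delay product of the long fat network, which fixes how much data must be in flight to fill the link. A quick calculation with the numbers quoted above (the 32-stream split is an illustrative assumption, not the paper's configuration):

```python
def tcp_window_for_link(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: the amount of data that must be
    'in flight' (total TCP window across all parallel streams) to keep a
    long fat network fully utilised."""
    return bandwidth_bps / 8.0 * rtt_s

# The Tokyo--Lyon link described above: 10 Gbps bandwidth, 290 ms RTT.
bdp_bytes = tcp_window_for_link(10e9, 0.290)   # 362.5 MB must be in flight
# Splitting across, e.g., 32 parallel GridFTP streams (illustrative number)
# still requires a per-stream TCP window of roughly 11 MB.
per_stream_bytes = bdp_bytes / 32
```

This is why both the number of parallel streams and the TCP window size had to be scanned jointly: the product of the two must reach the bandwidth-delay product before the shared link can be saturated.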
Directory of Open Access Journals (Sweden)
Giorgos Minas
2017-07-01
In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitation of the standard Linear Noise Approximation (LNA), remaining uniformly accurate for long times while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.
On the problem of earthquake correlation in space and time over large distances
Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.
2012-04-01
A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996], "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, using an explicit segmentation of the data on both their temporal and spatial stamps, following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event. In other words, the major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of a two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimate of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
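The reason the TGIC pre-computation keeps the filter linear is that the Kalman filter then only ever sees linear predict/update algebra. A generic predict/update cycle of that kind is sketched below; the matrix names are the textbook ones, not taken from the paper, and the TGIC quaternion would enter as the measurement `z`.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : measurement (in the paper, a quaternion pre-computed by TGIC,
           which is what keeps the measurement equation linear)
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Because no Jacobians of a nonlinear measurement model are needed, each cycle is a handful of small matrix products, which is the computational saving claimed above.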
International Nuclear Information System (INIS)
Kimura, Fumiko; Umezawa, Tatsuo; Asano, Tomonari; Chihara, Ruri; Nishi, Naoko; Nishimura, Shigeyoshi; Sakai, Fumikazu
2010-01-01
We compared stair-step artifacts and radiation dose between prospective electrocardiography (ECG)-gated coronary computed tomography angiography (prospective CCTA) and retrospective CCTA using 64-detector CT and determined the optimal padding time (PT) for prospective CCTA. We retrospectively evaluated 183 patients [mean heart rate (HR) <65 beats/min, maximum HR instability <5 beats/min] who had undergone CCTA. We scored stair-step artifacts from 1 (severe) to 5 (none) and evaluated the effective dose in 53 patients with retrospective CCTA and 130 with prospective CCTA (PT 200 ms, n=32; PT 50 ms, n=98). Mean artifact scores were 4.3 in both retrospective and prospective CCTAs. However, statistically more arteries scored <3 (nonassessable) on prospective CCTA (P<0.001). Mean scores for prospective CCTA with 200- and 50-ms PT were 4.1 and 4.3, respectively (no significant difference). The radiation dose of prospective CCTA was reduced by 59.1% to 80.7%. Prospective CCTA reduces the radiation dose and allows diagnostic imaging in most cases but shows more nonevaluable artifacts than retrospective CCTA. Use of 50-ms instead of 200-ms PT appears to maintain image quality in patients with a mean HR <65 beats/min and HR instability of <5 beats/min. (author)
International Nuclear Information System (INIS)
Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran
2017-01-01
In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the hysteresis model used. In this paper, a hysteresis model based on first-order reversal curves has been used. - Highlights: • A lumped-element method for modelling transformers is demonstrated. • The method can include hysteresis and arbitrarily complex geometries. • Simulation results for one power transformer are compared to measurements. • An analytical curve-fitting expression for static hysteresis loops is shown.
In-situ high resolution particle sampling by large time sequence inertial spectrometry
International Nuclear Information System (INIS)
Prodi, V.; Belosi, F.
1990-09-01
In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with Zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time sequence sampling capabilities under high temperature (700 °C)
REM-3D Reference Datasets: Reconciling large and diverse compilations of travel-time observations
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
A three-dimensional Reference Earth model (REM-3D) should ideally represent the consensus view of long-wavelength heterogeneity in the Earth's mantle through the joint modeling of large and diverse seismological datasets. This requires reconciliation of datasets obtained using various methodologies and identification of consistent features. The goal of REM-3D datasets is to provide a quality-controlled and comprehensive set of seismic observations that would not only enable construction of REM-3D, but also allow identification of outliers and assist in more detailed studies of heterogeneity. The community response to data solicitation has been enthusiastic with several groups across the world contributing recent measurements of normal modes, (fundamental mode and overtone) surface waves, and body waves. We present results from ongoing work with body and surface wave datasets analyzed in consultation with a Reference Dataset Working Group. We have formulated procedures for reconciling travel-time datasets that include: (1) quality control for salvaging missing metadata; (2) identification of and reasons for discrepant measurements; (3) homogenization of coverage through the construction of summary rays; and (4) inversions of structure at various wavelengths to evaluate inter-dataset consistency. In consultation with the Reference Dataset Working Group, we retrieved the station and earthquake metadata in several legacy compilations and codified several guidelines that would facilitate easy storage and reproducibility. We find strong agreement between the dispersion measurements of fundamental-mode Rayleigh waves, particularly when made using supervised techniques. The agreement deteriorates substantially in surface-wave overtones, for which discrepancies vary with frequency and overtone number. A half-cycle band of discrepancies is attributed to reversed instrument polarities at a limited number of stations, which are not reflected in the instrument response history
Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.
2014-04-01
When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re ≲ 189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to
Herrendoerfer, R.; van Dinther, Y.; Gerya, T.
2015-12-01
To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake-cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant for estimating the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we have implemented rate- and state-dependent friction (RSF) and adaptive time-stepping into our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation; thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby validating, to first order, our implementation. By incorporating adaptive time-stepping based on a
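The rate- and state-dependent friction law referred to above has a standard form; the sketch below uses the Dieterich "ageing" version with illustrative parameter values (not those of the model described in the abstract). Rate weakening, and hence stick-slip, requires a - b < 0.

```python
import numpy as np

def rsf_friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, d_c=1e-2):
    """Rate-and-state friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc).
    v     : slip rate [m/s], theta : state variable [s]
    All parameter values here are illustrative placeholders."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

def theta_dot(v, theta, d_c=1e-2):
    """Ageing-law state evolution: d(theta)/dt = 1 - V*theta/Dc.
    Steady state is theta_ss = Dc/V, where friction reduces to
    mu = mu0 + (a - b)*ln(V/V0)."""
    return 1.0 - v * theta / d_c
```

At steady state the effective rate dependence is a - b, which is why varying a, b, and the characteristic slip distance moves the system between stable sliding and stick-slip, as described above.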
Cakar, N; Tuğrul, M; Demirarslan, A; Nahum, A; Adams, A; Akıncı, O; Esen, F; Telci, L
2001-04-01
To determine the time required for the partial pressure of arterial oxygen (PaO2) to reach equilibrium after a 0.20 increment or decrement in fractional inspired oxygen concentration (FIO2) during mechanical ventilation. A multi-disciplinary ICU in a university hospital. Twenty-five adult, non-COPD patients with stable blood gas values (PaO2/FIO2 >= 180 on the day of the study) on pressure-controlled ventilation (PCV). Following a baseline PaO2 (PaO2b) measurement at FIO2 = 0.35, the FIO2 was increased to 0.55 for 30 min and then decreased to 0.35 without any other change in ventilatory parameters. Sequential blood gas measurements were performed at 3, 5, 7, 9, 11, 15, 20, 25 and 30 min in both periods. The PaO2 values measured at the 30th min after a step change in FIO2 (FIO2 = 0.55, PaO2[55] and FIO2 = 0.35, PaO2[35]) were accepted as representative of the equilibrium values for PaO2. Each patient's rise and fall in PaO2 over time, PaO2(t), were fitted to the following respective exponential equations: PaO2b + (PaO2[55] - PaO2b)(1 - e^(-kt)) and PaO2[55] + (PaO2[35] - PaO2[55])e^(-kt), where "t" refers to time, and PaO2[55] and PaO2[35] are the final PaO2 values obtained at the new FIO2 of 0.55 and 0.35, after a 0.20 increment and decrement in FIO2, respectively. The time constant "k" was determined by non-linear curve fitting, and 90% oxygenation times were defined as the time required to reach 90% of the final equilibrated PaO2, calculated using the fitted curves. Time constant values for the rise and fall periods were 1.01 +/- 0.71 min^-1 and 0.69 +/- 0.42 min^-1, respectively, and 90% oxygenation times for the rise and fall periods were 4.2 +/- 4.1 min and 5.5 +/- 4.8 min, respectively. There was no significant difference between the rise and fall periods for the two parameters (p > 0.05). We conclude that in stable patients ventilated with PCV, after a step change in FIO2 of 0.20, 5-10 min will be adequate for obtaining a blood gas sample to measure PaO2.
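The 90% oxygenation time follows directly from the exponential model: solving 1 - e^(-kt) = 0.9 gives t90 = ln(10)/k. A minimal sketch of the fitted rise equation and this relation (function names are my own; note that the mean of per-patient t90 values reported above need not equal ln(10) divided by the mean k, since k varies widely across patients):

```python
import numpy as np

def pao2_rise(t, pao2_b, pao2_55, k):
    """Exponential approach to the new equilibrium after an FIO2 step-up:
    PaO2(t) = PaO2b + (PaO2[55] - PaO2b) * (1 - e^(-k t))."""
    return pao2_b + (pao2_55 - pao2_b) * (1.0 - np.exp(-k * t))

def t90(k):
    """Time to reach 90% of the equilibrium change:
    solve 1 - e^(-k t) = 0.9  =>  t = ln(10) / k."""
    return np.log(10.0) / k
```

For example, with k = 0.7 min^-1 the model reaches 90% of the step from a baseline of 100 to an equilibrium of 200 (i.e. PaO2 = 190) at t = t90(0.7) ≈ 3.3 min.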
Directory of Open Access Journals (Sweden)
Chin-Yi Tsai
2014-01-01
In this work, tandem amorphous/microcrystalline silicon thin-film large-area see-through color solar modules were successfully designed and developed for building-integrated photovoltaic applications. Novel key technologies of reflective layers and 4-step laser scribing were researched, developed, and introduced into the production line to produce solar panels in various colors, such as purple, dark blue, light blue, silver, golden, orange, red wine, and coffee. The highest module power is 105 W and the highest visible light transmittance is near 20%.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which-as shown on the contact process-provides a significant improvement of the large deviation function estimators compared to the standard one.
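The extrapolation idea — fit the finite-time and finite-size corrections to the estimator and read off the intercept — can be sketched as follows. The simple additive 1/T and 1/N correction terms below are placeholder assumptions for illustration; the paper derives the appropriate scaling forms for the population-dynamics estimator.

```python
import numpy as np

def extrapolate_ldf(estimates, times, sizes):
    """Joint least-squares fit of  psi(T, N) ~ psi_inf + a/T + b/N
    over estimates obtained at several simulation times T and population
    sizes N, returning the infinite-time, infinite-size intercept psi_inf.
    (The 1/T and 1/N forms are assumed here for illustration.)"""
    times = np.asarray(times, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    A = np.column_stack([np.ones_like(times), 1.0 / times, 1.0 / sizes])
    coef, *_ = np.linalg.lstsq(A, np.asarray(estimates, dtype=float), rcond=None)
    return coef[0]   # psi_inf
```

Given estimator values on a grid of (T, N), the fit removes both systematic biases at once, which is the improvement over using the largest-T, largest-N estimate directly.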
Real-Time Large-Scale 3D Reconstruction by Fusing Kinect and IMU Data
Huai, J.; Zhang, Y.; Yilmaz, A.
2015-08-01
Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation, and augmented reality. However, to generate dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
A large deviations approach to limit theory for heavy-tailed time series
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...
Keith Jennings; Julia A. Jones
2015-01-01
This study tested multiple hydrologic mechanisms to explain snowpack dynamics in extreme rain-on-snow floods, which occur widely in the temperate and polar regions. We examined 26 ten-day large storm events over the period 1992-2012 in the H.J. Andrews Experimental Forest in western Oregon, using statistical analyses (regression, ANOVA, and wavelet coherence) of hourly...
The part-time wage penalty in European countries: how large is it for men?
O'Dorchai, Sile Padraigin; Plasman, Robert; Rycx, François
2007-01-01
Economic theory advances a number of reasons for the existence of a wage gap between part-time and full-time workers. Empirical work has concentrated on the wage effects of part-time work for women. For men, much less empirical evidence exists, mainly because of a lack of data. In this paper, we take advantage of access to unique harmonised matched employer-employee data (i.e. the 1995 European Structure of Earnings Survey) to investigate the magnitude and sources of the part-time wage penalty ...
Mao, Y.; Crow, W. T.; Nijssen, B.
2017-12-01
Soil moisture (SM) plays an important role in runoff generation, both by partitioning infiltration and surface runoff during rainfall events and by controlling the rate of subsurface flow during inter-storm periods. Therefore, more accurate SM state estimation in hydrologic models is potentially beneficial for streamflow prediction. Various previous studies have explored the potential of assimilating SM data into hydrologic models for streamflow improvement. These studies have drawn inconsistent conclusions, ranging from significantly improved runoff via SM data assimilation (DA) to limited or degraded runoff. These studies commonly treat the whole assimilation procedure as a black box without separating the contribution of each step in the procedure, making it difficult to attribute the underlying causes of runoff improvement (or the lack thereof). In this study, we decompose the overall DA process into three steps by answering the following questions (3-step framework): 1) how much can assimilation of surface SM measurements improve the surface SM state in a hydrologic model? 2) how much does surface SM improvement propagate to deeper layers? 3) how much does (surface and deeper-layer) SM improvement propagate into runoff improvement? A synthetic twin experiment is carried out in the Arkansas-Red River basin (∼600,000 km²), where a synthetic "truth" run, an open-loop run (without DA) and a DA run (where synthetic surface SM measurements are assimilated) are generated. All model runs are performed at 1/8-degree resolution and over a 10-year period using the Variable Infiltration Capacity (VIC) hydrologic model at a 3-hourly time step. For the DA run, the ensemble Kalman filter (EnKF) method is applied. The updated surface and deeper-layer SM states with DA are compared to the open-loop SM to quantitatively evaluate the first two steps in the framework. To quantify the third step, a set of perfect-state runs are generated where the "true" SM states are directly inserted
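The assimilation step in such a DA run can be illustrated with a minimal, generic ensemble Kalman filter update for a scalar surface SM state. This is a textbook sketch assuming a direct observation of the state (observation operator H = 1), not the study's actual VIC/EnKF configuration:

```python
import numpy as np

# Minimal EnKF update with perturbed observations for a scalar state.
def enkf_update(ensemble, obs, obs_var, rng):
    """ensemble: (n_members,) surface SM states; obs: one measurement."""
    n = ensemble.size
    prior_var = ensemble.var(ddof=1)              # ensemble spread
    gain = prior_var / (prior_var + obs_var)      # Kalman gain (H = 1)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(0)
prior = rng.normal(0.30, 0.05, 100)   # prior SM ensemble (m3/m3), assumed
post = enkf_update(prior, 0.20, 0.01 ** 2, rng)
```

The posterior ensemble mean is pulled toward the observation and its spread shrinks; in the study this update is applied grid cell by grid cell at each assimilation time.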
Directory of Open Access Journals (Sweden)
Regula Morgenegg
2018-01-01
This retrospective study examined the OR turnaround data of 875 elective surgery cases scheduled at the Marienhospital, Vechta, Germany, between July and October 2014. The frequency distributions of planned and actual OR turnaround times were compared, and correlations between turnaround times and various factors were established, including the time of day of the procedure, patient age and the planned duration of the surgery. Results: There was a significant difference between mean planned and actual OR turnaround times (0.32 versus 0.64 hours; P < 0.001). In addition, significant correlations were noted between actual OR turnaround times and the time of day of the surgery, patient age, actual duration of the procedure and staffing changes affecting the surgeon or the medical specialty of the surgery (P < 0.001 each). The quotient of actual/planned OR turnaround times ranged from 1.733 to 3.000. Conclusion: Significant discrepancies between planned and actual OR turnaround times were noted during the study period. Such findings may potentially be used in future studies to establish a tool to improve OR planning, measure OR management performance and enable benchmarking.
Desvillettes, Laurent
2010-01-01
We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and second we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assumes that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.
Large time asymptotics of solutions to the anharmonic oscillator model from nonlinear optics
Jochmann, Frank
2005-01-01
The anharmonic oscillator model describing the propagation of electromagnetic waves in an exterior domain containing a nonlinear dielectric medium is investigated. The system under consideration consists of a generally nonlinear second order differential equation for the dielectrical polarization coupled with Maxwell's equations for the electromagnetic field. Local decay of the electromagnetic field for t to infinity in the charge free case is shown for a large class of potentials. (This pape...
Energy Technology Data Exchange (ETDEWEB)
Laitinen, T.
2013-11-01
This thesis is based on the construction of a two-step laser desorption-ionization aerosol time-of-flight mass spectrometer (laser AMS), which is capable of measuring 10 to 50 nm aerosol particles collected from urban and rural air on-site and in near real time. The operation and applicability of the instrument were tested with various laboratory measurements, including parallel measurements with filter collection/chromatographic analysis, and then in field experiments in an urban environment and a boreal forest. Ambient ultrafine aerosol particles are collected on a metal surface by electrostatic precipitation and introduced to the time-of-flight mass spectrometer (TOF-MS) with a sampling valve. Before MS analysis, particles are desorbed from the sampling surface with an infrared laser and ionized with a UV laser. The formed ions are guided to the TOF-MS by ion transfer optics, separated according to their m/z ratios, and detected with a microchannel plate detector. The laser AMS was used in urban air studies to quantify the carbon cluster content of 50 nm aerosol particles. Standards for the study were produced from 50 nm graphite particles, suspended in toluene, with 72 hours of high-power sonication. The results showed the average amount of carbon clusters (winter 2012, Helsinki, Finland) in 50 nm particles to be 7.2% per sample. Several fullerenes/fullerene fragments were detected during the measurements. In boreal forest measurements, the laser AMS was capable of detecting several different organic species in 10 to 50 nm particles. These included nitrogen-containing compounds, carbon clusters, aromatics, aliphatic hydrocarbons, and oxygenated hydrocarbons. A most interesting event occurred during the boreal forest measurements in spring 2011, when the chemistry of the atmosphere clearly changed during snow melt. At that time, concentrations of laser AMS ions m/z 143 and 185 (10 nm particles) increased dramatically. Exactly at the same time, quinoline concentrations
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
Kaald, Rune; Eggen, Trym; Ytterdal, Trond
2017-02-01
Fully digitized 2D ultrasound transducer arrays require one ADC per channel, with a beamforming architecture consuming low power. We give design considerations for per-channel digitization and beamforming, and present the design and measurements of a continuous-time delta-sigma modulator (CTDSM) for cardiac ultrasound applications. By integrating a mixer into the modulator frontend, the phase and frequency of the input signal can be shifted, thereby enabling both improved conversion efficiency and narrowband beamforming. To minimize the power consumption, we propose an optimization methodology using a simulated annealing framework combined with a C++ simulator solving linear electrical networks. The 3rd-order single-bit feedback-type modulator, implemented in a 65 nm CMOS process, achieves an SNR/SNDR of 67.8/67.4 dB across 1 MHz bandwidth while consuming 131 [Formula: see text] of power. The achieved figure of merit of 34.2 fJ/step is comparable with state-of-the-art feedforward-type multi-bit designs. We further demonstrate the influence on the dynamic range when performing dynamic receive beamforming on recorded delta-sigma modulated bit-stream sequences.
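As background, the noise-shaping idea behind a delta-sigma modulator can be illustrated with a first-order, single-bit loop. The design above is a third-order feedback modulator with an integrated mixer; this simplified model only shows the underlying principle:

```python
# First-order single-bit delta-sigma modulator (illustrative model only,
# not the paper's third-order CTDSM): the integrator accumulates the
# quantization error, so the 1-bit output tracks the input on average.
def dsm_first_order(samples):
    integrator = 0.0
    bits = []
    for x in samples:
        q = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        integrator += x - q                     # feed back the error
        bits.append(q)
    return bits

bits = dsm_first_order([0.5] * 1000)
# the running average of the bit-stream approaches the DC input (0.5)
```

Low-pass filtering (decimating) the bit-stream recovers the input; higher-order loops like the one in the paper push more quantization noise out of the signal band.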
Interactive exploration of large-scale time-varying data using dynamic tracking graphs
Widanagamaachchi, W.; Christensen, C.; Bremer, P.-T; Pascucci, Valerio
2012-01-01
that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take
The "Flight Chamber": A fast, large area, zero-time detector
International Nuclear Information System (INIS)
Trautner, N.
1976-01-01
A new, fast, zero-time detector with an active area of 20 cm² has been constructed. Secondary electrons from a thin self-supporting foil are accelerated onto a scintillator. The intrinsic time resolution (FWHM) was 0.85 ns for 5.5 MeV α-particles and 0.42 ns for 17 MeV ¹⁶O ions, at efficiencies of 97.5% and 99.6%, respectively. (author)
Seeber, P A; Franz, M; Dehnhard, M; Ganswindt, A; Greenwood, A D; East, M L
2018-04-20
Adverse environmental stimuli (stressors) activate the hypothalamic-pituitary-adrenal axis and contribute to allostatic load. This study investigates the contribution of environmental stressors and life history stage to allostatic load in a migratory population of plains zebras (Equus quagga) in the Serengeti ecosystem, Tanzania, which experiences large local variations in aggregation. We expected a higher fecal glucocorticoid metabolite (fGCM) response to the environmental stressors of feeding competition, predation pressure and unpredictable social relationships in larger than in smaller aggregations, and in animals at energetically costly life history stages. As the study was conducted during the 2016 El Niño, we did not expect forage quality or a lack of water to strongly affect fGCM responses in the dry season. We measured fGCM concentrations using an enzyme immunoassay (EIA) targeting 11β-hydroxyetiocholanolone and validated its reliability in captive plains zebras. Our results revealed significantly higher fGCM concentrations 1) in large aggregations than in smaller groupings, and 2) in band stallions than in bachelor males. Concentrations of fGCM were not significantly higher in females at the energetically costly life stage of late pregnancy/lactation. The higher allostatic load of stallions associated with females, compared with bachelor males, is likely caused by social stressors. In conclusion, migratory zebras have elevated allostatic loads in large aggregations, probably resulting from their combined responses to increased feeding competition, predation pressure and various social stressors. Further research is required to disentangle the contribution of these stressors to allostatic load in migratory populations. Copyright © 2018 Elsevier Inc. All rights reserved.
A hybrid adaptive large neighborhood search heuristic for lot-sizing with setup times
DEFF Research Database (Denmark)
Muller, Laurent Flindt; Spoorendonk, Simon; Pisinger, David
2012-01-01
This paper presents a hybrid of a general heuristic framework and a general-purpose mixed-integer programming (MIP) solver. The framework is based on local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed-integer programming solver and its..., and the upper bounds found by the commercial MIP solver ILOG CPLEX using state-of-the-art MIP formulations. Furthermore, we improve the best known solutions on 60 out of 100 instances and improve the lower bound on all 100 instances from the literature...
International Nuclear Information System (INIS)
1980-10-01
This book is divided into three parts and concerns the practical use of stepping motors. The first part has six chapters, covering the stepping motor, its classification, basic theory, characteristics and terminology, the types and characteristics of hybrid stepping motors, and basic stepping motor control. The second part deals with applications of the stepping motor: control hardware, control by microcomputer, and control software. The last part covers choosing a stepping motor system, examples of stepping motors, measurement of stepping motors, and practical application cases.
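Basic stepping motor control of the kind covered in the second part reduces to cycling the coil excitation pattern in sequence. A minimal sketch of a standard two-phase full-step sequence (the coil states and interface are illustrative, not taken from the book; real hardware would drive these through a driver IC):

```python
# Standard full-step excitation table for a two-phase stepping motor:
# each tuple is (A+, A-, B+, B-) coil energization.
FULL_STEP = [
    (1, 0, 1, 0),  # phase A+, B+
    (0, 1, 1, 0),  # phase A-, B+
    (0, 1, 0, 1),  # phase A-, B-
    (1, 0, 0, 1),  # phase A+, B-
]

def step_sequence(n_steps, direction=+1):
    """Yield successive coil states to advance the rotor n_steps;
    direction=-1 reverses rotation by walking the table backwards."""
    idx = 0
    for _ in range(n_steps):
        idx = (idx + direction) % len(FULL_STEP)
        yield FULL_STEP[idx]
```

Step rate (the delay between successive states) sets the motor speed, which is why timing limits and resonance are central topics in stepping motor control.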
Modelling and Formal Verification of Timing Aspects in Large PLC Programs
Fernandez Adiego, B; Blanco Vinuela, E; Tournier, J-C; Gonzalez Suarez, V M; Blech, J O
2014-01-01
One of the main obstacles that prevents model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper addresses this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow users to balance the trade-off between the complexity of the model, i.e., its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which produces models for different model checkers directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.
Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes
Itaba, S.
2016-12-01
The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (of order 10⁻⁷), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. In contrast, the prompt report of the magnitude that the Japan Meteorological Agency (JMA) announced just after the earthquake occurred, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method can be checked against other events, such as this earthquake's largest aftershock, which occurred 29 minutes after the mainshock: the prompt report issued by JMA assigned this aftershock a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, which is much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to determine the magnitude of a great earthquake sooner and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is one strong approach for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
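For context, the standard Hanks-Kanamori relation converts a seismic moment M0 (e.g., one inferred from a strain-based fault model) into a moment magnitude. The abstract does not give the authors' exact computation, so this is general background only:

```python
import math

# Hanks-Kanamori moment-magnitude relation, with M0 in newton-meters:
#   Mw = (2/3) * (log10(M0) - 9.1)
def moment_magnitude(m0_newton_meters):
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def seismic_moment(mw):
    """Inverse relation: seismic moment in N*m for a given Mw."""
    return 10 ** (1.5 * mw + 9.1)
```

Because Mw grows with the logarithm of M0, the gap between the prompt seismic-wave magnitude (7.9) and the final Mw 9.0 corresponds to roughly a 40-fold difference in seismic moment.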
International Nuclear Information System (INIS)
Liu, H.-L.; Chen, Y.-Y.; Yen, J.-Y.; Lin, W.-L.
2003-01-01
To generate large thermal lesions in ultrasound thermal therapy, cooling intermissions are usually introduced during the treatment to prevent near-field heating, which leads to a long treatment time. A possible strategy to shorten the total treatment time is to eliminate the cooling intermissions. In this study, two methods for reducing power accumulation in the near field, power optimization and acoustic window enlargement, are combined to investigate the feasibility of continuously heating a large target region (maximally 3.2 × 3.2 × 3.2 cm³). A multiple-1D ultrasound phased array system generates the foci to scan the target region. Simulations show that the target region can be successfully heated without cooling and no near-field heating occurs. Moreover, because there is no cooling time during the heating sessions, the total treatment time is significantly reduced to only several minutes, compared to the existing several hours.
Time-scale effects in the interaction between a large and a small herbivore
Kuijper, D. P. J.; Beek, P.; van Wieren, S.E.; Bakker, J. P.
2008-01-01
In the short term, grazing will mainly affect plant biomass and forage quality. However, grazing can affect plant species composition by accelerating or retarding succession at longer time-scales. Few studies concerning interactions among herbivores have taken the change in plant species composition
Eulerian short-time statistics of turbulent flow at large Reynolds number
Brouwers, J.J.H.
2004-01-01
An asymptotic analysis is presented of the short-time behavior of second-order temporal velocity structure functions and Eulerian acceleration correlations in a frame that moves with the local mean velocity of the turbulent flow field. Expressions in closed-form are derived which cover the viscous
Response time distributions in rapid chess: A large-scale decision making experiment
Directory of Open Access Journals (Sweden)
Mariano Sigman
2010-10-01
Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose around 40 consecutive moves within a finite time budget. The goodness of each choice can be determined quantitatively, since current chess algorithms estimate the value of a position precisely. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times and position values in rapid chess games. We measured robust emergent statistical observables: (1) response time (RT) distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated, both for intra- and inter-player moves. These findings have theoretical implications, since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
A fast large-area position-sensitive time-of-flight neutron detection system
International Nuclear Information System (INIS)
Crawford, R.K.; Haumann, J.R.
1989-01-01
A new position-sensitive time-of-flight neutron detection and histogramming system has been developed for use at the Intense Pulsed Neutron Source. Spatial resolution of roughly 1 cm x 1 cm and time-of-flight resolution of ∼1 μsec are combined in a detection system which can ultimately be expanded to cover several square meters of active detector area. This system is based on the use of arrays of cylindrical one-dimensional position-sensitive proportional counters, and is capable of collecting the x-y-t data and sorting them into histograms at time-averaged data rates of up to ∼300,000 events/sec over the full detector area, and with instantaneous data rates up to more than fifty times that. Numerous hardware features have been incorporated to facilitate initial tuning of the position encoding, absolute calibration of the encoded positions, and automatic testing for drifts. 7 refs., 11 figs., 1 tab
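The x-y-t histogramming step has a direct software analogue: each detected event is a triple (position x, position y, time of flight t) that gets binned into a 3D histogram. The sketch below bins synthetic events with illustrative bin sizes (roughly the quoted 1 cm x 1 cm spatial resolution), not the instrument's actual parameters:

```python
import numpy as np

# Bin synthetic neutron events into an x-y-t histogram (illustrative
# detector size, time window, and bin counts; not the IPNS hardware).
rng = np.random.default_rng(1)
n_events = 10_000
x = rng.uniform(0, 100, n_events)      # position across detector, cm
y = rng.uniform(0, 100, n_events)      # position, cm
t = rng.uniform(0, 20_000, n_events)   # time of flight, microseconds

hist, edges = np.histogramdd(
    np.column_stack([x, y, t]),
    bins=(100, 100, 200),              # ~1 cm x 1 cm spatial bins
    range=((0, 100), (0, 100), (0, 20_000)),
)
```

The hardware described above performs this sorting on the fly at hundreds of thousands of events per second; in software it is a single vectorized binning pass.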
Citizen journalism in a time of crisis: lessons from a large-scale California wildfire
S. Gillette; J. Taylor; D.J. Chavez; R. Hodgson; J. Downing
2007-01-01
The accessibility of news production tools through consumer communication technology has made it possible for media consumers to become media producers. The evolution of media consumer to media producer has important implications for the shape of public discourse during a time of crisis. Citizen journalists cover crisis events using camera cell phones and digital...
Madison, G.; Mosing, M.A.; Verweij, K.J.H.; Pedersen, N.L.; Ullén, F.
2016-01-01
Intelligence and cognitive ability have long been associated with chronometric performance measures, such as reaction time (RT), but few studies have investigated auditory RT in this context. The nature of this relationship is important for understanding the etiology and structure of intelligence.
Near real-time large scale (sensor) data provisioning for PLF
Vonder, M.R.; Waaij, B.D. van der; Harmsma, E.J.; Donker, G.
2015-01-01
Think big, start small. With that thought in mind, Smart Dairy Farming (SDF) developed a platform to make real-time sensor data from different farms available, for model developers to support dairy farmers in Precision Livestock Farming. The data has been made available via a standard interface on
Cagnetti, Filippo; Gomes, Diogo A.; Mitake, Hiroyoshi; Tran, Hung V.
2015-01-01
We investigate large-time asymptotics for viscous Hamilton-Jacobi equations with possibly degenerate diffusion terms. We establish new results on the convergence, which are the first general ones concerning equations which are neither uniformly parabolic nor first order. Our method is based on the nonlinear adjoint method and the derivation of new estimates on long time averaging effects. It also extends to the case of weakly coupled systems.
Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk
2017-06-27
Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency, a high possibility of cost-effective fabrication, and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature, short-time annealing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained as a result of fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells, and may also be applicable to several other material systems for more widespread practical deployment.
A large scale flexible real-time communications topology for the LHC accelerator
Lauckner, R J; Ribeiro, P; Wijnands, Thijs
1999-01-01
The LHC design parameters impose very stringent beam control requirements in order to reach the nominal performance. Prompted by the lack of accurate models to predict field behaviour in superconducting magnet systems, the control system of the accelerator will provide flexible feedback channels between monitors and magnets around the 27 km circumference machine. The implementation of feedback systems composed of a large number of sparsely located elements presents some interesting challenges. Our goal was to find a topology where the control loop requirements (number and distribution of nodes, latency and throughput) could be guaranteed without compromising flexibility. Our proposal is to federate a number of well-known technologies and concepts, namely ATM, WorldFIP and RTOS, into a general framework. (6 refs).
Time-gated ballistic imaging using a large aperture switching beam.
Mathieu, Florian; Reddemann, Manuel A; Palmer, Johannes; Kneer, Reinhold
2014-03-24
Ballistic imaging commonly denotes the formation of line-of-sight shadowgraphs through turbid media by suppression of multiply scattered photons. The technique relies on a femtosecond laser acting as light source for the images and as switch for an optical Kerr gate that separates ballistic photons from multiply scattered ones. The achievable image resolution is one major limitation for the investigation of small objects. In this study, practical influences on the optical Kerr gate and image quality are discussed theoretically and experimentally applying a switching beam with large aperture (D = 19 mm). It is shown how switching pulse energy and synchronization of switching and imaging pulse in the Kerr cell influence the gate's transmission. Image quality of ballistic imaging and standard shadowgraphy is evaluated and compared, showing that the present ballistic imaging setup is advantageous for optical densities in the range of 8 ballistic imaging setup into a schlieren-type system with an optical schlieren edge.
Directory of Open Access Journals (Sweden)
Delogu Mauro
2006-05-01
Background: Avian influenza viruses (AIVs) are endemic in wild birds, and their introduction and conversion to highly pathogenic avian influenza virus in domestic poultry is a cause of serious economic losses as well as a risk for potential transmission to humans. The ability to rapidly recognise AIVs in biological specimens is critical for limiting further spread of the disease in poultry. The advent of molecular methods such as real-time polymerase chain reaction has allowed improvement of detection methods currently used in laboratories, although not all of these methods include an Internal Positive Control (IPC) to monitor for false negative results. Therefore, we developed a one-step reverse transcription real-time PCR (RRT-PCR) with a Minor Groove Binder (MGB) probe for the detection of different subtypes of AIVs. This technique also includes an IPC. Methods: RRT-PCR was developed using an improved TaqMan technology with an MGB probe to detect AI from reference viruses. Primers and probe were designed based on the matrix gene sequences of most animal and human influenza A virus subtypes. The specificity of the RRT-PCR was assessed by detecting influenza A virus isolates belonging to subtypes H1–H13 isolated from avian, human, swine and equine hosts. The analytical sensitivity of the RRT-PCR assay was determined using serial dilutions of in vitro transcribed matrix gene RNA. A rodent RNA was adopted as the IPC so as not to reduce the efficiency of the assay. Results: The RRT-PCR assay is capable of detecting all tested influenza A viruses. The detection limit of the assay was shown to be between 5 and 50 RNA copies per reaction, and the standard curve demonstrated a linear range from 5 to 5 × 10⁸ copies as well as excellent reproducibility. The analytical sensitivity of the assay is 10–100 times higher than conventional RT-PCR. Conclusion: The high sensitivity, rapidity, reproducibility and specificity of the AIV RRT-PCR with
Sorger, Bettina; Kamp, Tabea; Weiskopf, Nikolaus; Peters, Judith Caroline; Goebel, Rainer
2018-05-15
Brain-computer interfaces (BCIs) based on real-time functional magnetic resonance imaging (rtfMRI) are currently explored in the context of developing alternative (motor-independent) communication and control means for the severely disabled. In such BCI systems, the user encodes a particular intention (e.g., an answer to a question or an intended action) by evoking specific mental activity resulting in a distinct brain state that can be decoded from fMRI activation. One goal in this context is to increase the degrees of freedom in encoding different intentions, i.e., to allow the BCI user to choose from as many options as possible. Recently, the ability to voluntarily modulate spatial and/or temporal blood oxygenation level-dependent (BOLD)-signal features has been explored implementing different mental tasks and/or different encoding time intervals, respectively. Our two-session fMRI feasibility study systematically investigated for the first time the possibility of using magnitudinal BOLD-signal features for intention encoding. Particularly, in our novel paradigm, participants (n=10) were asked to alternately self-regulate their regional brain-activation level to 30%, 60% or 90% of their maximal capacity by applying a selected activation strategy (i.e., performing a mental task, e.g., inner speech) and modulation strategies (e.g., using different speech rates) suggested by the experimenters. In a second step, we tested the hypothesis that the additional availability of feedback information on the current BOLD-signal level within a region of interest improves the gradual-self regulation performance. Therefore, participants were provided with neurofeedback in one of the two fMRI sessions. Our results show that the majority of the participants were able to gradually self-regulate regional brain activation to at least two different target levels even in the absence of neurofeedback. When provided with continuous feedback on their current BOLD-signal level, most
Interstitial laser photocoagulation for benign thyroid nodules: time to treat large nodules.
Amabile, Gerardo; Rotondi, Mario; Pirali, Barbara; Dionisio, Rosa; Agozzino, Lucio; Lanza, Michele; Buonanno, Luciano; Di Filippo, Bruno; Fonte, Rodolfo; Chiovato, Luca
2011-09-01
Interstitial laser photocoagulation (ILP) is a new therapeutic option for the ablation of non-functioning and hyper-functioning benign thyroid nodules. Amelioration of the ablation procedure currently allows treating large nodules. Aim of this study was to evaluate the therapeutic efficacy of ILP, performed according to a modified protocol of ablation, in patients with large functioning and non-functioning thyroid nodules and to identify the best parameters for predicting successful outcome in hyperthyroid patients. Fifty-one patients with non-functioning thyroid nodules (group 1) and 26 patients with hyperfunctioning thyroid nodules (group 2) were enrolled. All patients had a nodular volume ≥40 ml. Patients were addressed to 1-3 cycles of ILP. A cycle consisted of three ILP sessions, each lasting 5-10 minutes repeated at an interval of 1 month. After each cycle of ILP patients underwent thyroid evaluation. A nodule volume reduction, expressed as percentage of the basal volume, significantly occurred in both groups (F = 190.4; P nodule volume; (iii) total amount of energy delivered expressed in Joule. ROC curves identified the percentage of volume reduction as the best parameter predicting a normalized serum TSH (area under the curve 0.962; P thyroid nodules, both in terms of nodule size reduction and cure of hyperthyroidism (87% of cured patients after the last ILP cycle). ILP should not be limited to patients refusing or being ineligible for surgery and/or radioiodine. Copyright © 2011 Wiley-Liss, Inc.
Giesler, Reiner; Clemmensen, Karina E; Wardle, David A; Klaminder, Jonatan; Bindler, Richard
2017-03-07
Alterations in fire activity due to climate change and fire suppression may have profound effects on the balance between storage and release of carbon (C) and associated volatile elements. Stored soil mercury (Hg) is known to volatilize due to wildfires, and this could substantially affect the land-air exchange of Hg; conversely, the absence of fires and human disturbance may increase the time period over which Hg is sequestered. Here we show, for a wildfire chronosequence spanning more than 5000 years in boreal forest in northern Sweden, that belowground inventories of total Hg are strongly related to soil humus C accumulation (R² = 0.94, p millennial time scales in the prolonged absence of fire.
International Nuclear Information System (INIS)
Yong-Jun, Wang; Xiang-Jun, Xin; Xiao-Lei, Zhang; Chong-Qing, Wu; Kuang-Lu, Yu
2010-01-01
Optical buffers are critical for optical signal processing in future optical packet-switched networks. In this paper, a theoretical study and an experimental demonstration of a new optical buffer with a large dynamic delay time are carried out based on cascaded double loop optical buffers (DLOBs). It is found that pulse distortion can be restrained by a negative optical control mode when the optical packet is in the loop. Noise analysis indicates that it is feasible to realise a large variable delay range with cascaded DLOBs. These conclusions are validated by an experimental system with 4-stage cascaded DLOBs. Both the theoretical simulations and the experimental results indicate that a large delay range of 1–9999 times the basic delay unit and a fine granularity of 25 ns can be achieved by the cascaded DLOBs. This performance makes cascaded DLOBs suitable for all-optical networks. (classical areas of phenomenology)
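The reported delay range follows directly from integer multiples of the 25 ns basic delay unit; a minimal sketch of that arithmetic (the constant and function names are ours, not from the paper):

```python
# Sketch of the delay range reported for the cascaded DLOBs: achievable
# delays are integer multiples k (1..9999) of the 25 ns basic delay unit.
BASIC_DELAY_NS = 25  # fine granularity reported in the abstract

def dlob_delay_ns(k: int) -> int:
    """Total delay, in ns, for k passes through the basic delay unit."""
    if not 1 <= k <= 9999:
        raise ValueError("k outside the reported 1-9999 range")
    return k * BASIC_DELAY_NS

print(dlob_delay_ns(1))     # minimum delay: 25 ns
print(dlob_delay_ns(9999))  # maximum delay: 249975 ns (about 250 us)
```

The roughly four orders of magnitude between the minimum and maximum delay is what makes the buffer attractive for packet-switched networks.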
Response time distributions in rapid chess: a large-scale decision making experiment.
Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A
2010-01-01
Rapid chess provides an unparalleled laboratory for understanding decision making in a natural environment. In a chess game, players make around 40 consecutive moves within a finite time budget. The goodness of each choice can be determined quantitatively, since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated, both for intra- and inter-player moves. These findings have theoretical implications, since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
On large-time energy concentration in solutions to the Navier-Stokes equations in general domains
Czech Academy of Sciences Publication Activity Database
Skalák, Zdeněk
2011-01-01
Roč. 91, č. 9 (2011), s. 724-732 ISSN 0044-2267 R&D Projects: GA AV ČR IAA100190905 Institutional research plan: CEZ:AV0Z20600510 Keywords : Navier-Stokes equations * large-time behavior * energy concentration Subject RIV: BA - General Mathematics Impact factor: 0.863, year: 2011
Association between time perspective and organic food consumption in a large sample of adults.
Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine
2018-01-05
Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of
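The quartile comparisons above are reported as odds ratios from adjusted logistic regressions; an unadjusted odds ratio from a 2x2 table illustrates the quantity being estimated (the counts below are illustrative only, chosen to land near the reported Q4-vs-Q1 OR of 1.88, and are not the study's data):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Unadjusted odds ratio for a 2x2 table:
    a/b = organic consumers / non-consumers in CFC quartile 4,
    c/d = organic consumers / non-consumers in CFC quartile 1.
    """
    return (a / b) / (c / d)

# Illustrative counts only -- not the NutriNet-Sante data.
print(round(odds_ratio(300, 200, 200, 250), 3))  # -> 1.875
```

The study's ORs additionally adjust for socio-demographic, lifestyle and dietary covariates, which this raw 2x2 computation does not capture.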
Tien, Wei-Ping; Lim, Gareth; Yeo, Gladys; Chiang, Suzanna Nicole; Chong, Chee-Seng; Ng, Lee-Ching; Hapuarachchi, Hapuarachchige Chanditha
2017-09-19
The monitoring of vectors is one of the key surveillance measures to assess the risk of arbovirus transmission and the success of control strategies in endemic regions. The recent re-emergence of Zika virus (ZIKV) in the tropics, including Singapore, emphasizes the need to develop cost-effective, rapid and accurate assays to monitor the virus spread by mosquitoes. As ZIKV infections largely remain asymptomatic, early detection of ZIKV in the field-caught mosquitoes enables timely implementation of appropriate mosquito control measures. We developed a rapid, sensitive and specific real-time reverse transcription polymerase chain reaction (rRT-PCR) assay for the detection of ZIKV in field-caught mosquitoes. The primers and PCR cycling conditions were optimized to minimize non-specific amplification due to cross-reactivity with the genomic material of Aedes aegypti, Aedes albopictus, Culex quinquefasciatus, Culex tritaeniorhynchus, Culex sitiens and Anopheles sinensis, as well as accompanying microbiota. The performance of the assay was further evaluated with a panel of flaviviruses and alphaviruses as well as in field-caught Ae. aegypti mosquitoes confirmed to be positive for ZIKV. As compared to a probe-based assay, the newly developed assay demonstrated 100% specificity and comparable detection sensitivity for ZIKV in mosquitoes. Being a SYBR Green-based method, the newly-developed assay is cost-effective and easy to adapt, thus is applicable to large-scale vector surveillance activities in endemic countries, including those with limited resources and expertise. The amplicon size (119 bp) also allows sequencing to confirm the virus type. The primers flank relatively conserved regions of ZIKV genome, so that, the assay is able to detect genetically diverse ZIKV strains. Our findings, therefore, testify the potential use of the newly-developed assay in vector surveillance programmes for ZIKV in endemic regions.
Tracking Large Area Mangrove Deforestation with Time-Series of High Fidelity MODIS Imagery
Rahman, A. F.; Dragoni, D.; Didan, K.
2011-12-01
Mangrove forests are important coastal ecosystems of the tropical and subtropical regions. These forests provide critical ecosystem services, fulfill important socio-economic and environmental functions, and support coastal livelihoods. But these forest are also among the most vulnerable ecosystems, both to anthropogenic disturbance and climate change. Yet, there exists no map or published study showing detailed spatiotemporal trends of mangrove deforestation at local to regional scales. There is an immediate need of producing such detailed maps to further study the drivers, impacts and feedbacks of anthropogenic and climate factors on mangrove deforestation, and to develop local and regional scale adaptation/mitigation strategies. In this study we use a time-series of high fidelity imagery from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) for tracking changes in the greenness of mangrove forests of Kalimantan Island of Indonesia. A novel method of filtering satellite data for cloud, aerosol, and view angle effects was used to produce high fidelity MODIS time-series images at 250-meter spatial resolution and three-month temporal resolution for the period of 2000-2010. Enhanced Vegetation Index 2 (EVI2), a measure of vegetation greenness, was calculated from these images for each pixel at each time interval. Temporal variations in the EVI2 of each pixel were tracked as a proxy to deforestaton of mangroves using the statistical method of change-point analysis. Results of these change detection were validated using Monte Carlo simulation, photographs from Google-Earth, finer spatial resolution images from Landsat satellite, and ground based GIS data.
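The per-pixel change-point analysis of EVI2 time series can be illustrated with a minimal single-break detector that fits a one-break piecewise-constant model by least squares (a simplified stand-in, not the authors' exact statistical method):

```python
import numpy as np

def single_changepoint(series):
    """Return the split index minimizing the total squared error of a
    one-break piecewise-constant fit -- a minimal stand-in for the
    change-point analysis applied to the EVI2 greenness series."""
    x = np.asarray(series, dtype=float)
    best_k, best_sse = None, np.inf
    for k in range(1, len(x)):
        sse = ((x[:k] - x[:k].mean()) ** 2).sum() + ((x[k:] - x[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic greenness series: stable canopy, then an abrupt drop
# of the kind a deforestation event would produce.
evi2 = [0.62, 0.60, 0.61, 0.63, 0.60, 0.31, 0.29, 0.30, 0.32]
print(single_changepoint(evi2))  # -> 5
```

Real deployments would add a significance test for the detected break (as the study does via Monte Carlo simulation) before declaring a pixel deforested.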
Blanchet, Adrien
2009-01-01
A periodic perturbation of a Gaussian measure modifies the sharp constants in Poincaré and logarithmic Sobolev inequalities in the homogenization limit, that is, when the period of a periodic perturbation converges to zero. We use variational techniques to determine the homogenized constants and get optimal convergence rates towards equilibrium of the solutions of the perturbed diffusion equations. The study of these sharp constants is motivated by the study of the stochastic Stokes' drift. It also applies to Brownian ratchets and molecular motors in biology. We first establish a transport phenomenon. Asymptotically, the center of mass of the solution moves with a constant velocity, which is determined by a doubly periodic problem. In the reference frame attached to the center of mass, the behavior of the solution is governed at large scale by a diffusion with a modified diffusion coefficient. Using the homogenized logarithmic Sobolev inequality, we prove that the solution converges in self-similar variables attached to the center of mass to a stationary solution of a Fokker-Planck equation modulated by a periodic perturbation with fast oscillations, with an explicit rate. We also give an asymptotic expansion of the traveling diffusion front corresponding to the stochastic Stokes' drift with given potential flow. © 2009 Society for Industrial and Applied Mathematics.
Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide
DEFF Research Database (Denmark)
Lin, Li; Ou, Yiyu; Aagesen, Martin
2017-01-01
A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL) ... In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size...
Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI
International Nuclear Information System (INIS)
Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun
2000-01-01
The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)
Effects of walking speed on the step-by-step control of step width.
Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C
2018-02-08
Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
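The stabilization metric described above, the share of step-width variance explained by the pelvis state, is an R² from a multiple linear regression; a minimal sketch on synthetic data (variable names, effect sizes, and noise level are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical pelvis state at one point in the step:
# mediolateral position and velocity (standardized units).
pelvis = rng.normal(size=(n, 2))
# Synthetic step widths that partly depend on the pelvis state.
step_width = 0.8 * pelvis[:, 0] + 0.5 * pelvis[:, 1] + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept, then the R^2 used in the
# study to quantify how strongly pelvis mechanics predict step width.
X = np.column_stack([np.ones(n), pelvis])
beta, *_ = np.linalg.lstsq(X, step_width, rcond=None)
pred = X @ beta
r2 = 1 - ((step_width - pred) ** 2).sum() / ((step_width - step_width.mean()) ** 2).sum()
print(round(r2, 2))
```

Repeating this regression at successive points within a step, as the study does, yields an R² profile that rises toward the end of the step.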
A reference web architecture and patterns for real-time visual analytics on large streaming data
Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer
2013-12-01
Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
Detection of long nulls in PSR B1706-16, a pulsar with large timing irregularities
Naidu, Arun; Joshi, Bhal Chandra; Manoharan, P. K.; Krishnakumar, M. A.
2018-04-01
Single-pulse observations characterizing in detail the nulling behaviour of PSR B1706-16 are reported for the first time in this paper. Our regular long-duration monitoring of this pulsar reveals long nulls of 2-5 h, with an overall nulling fraction of 31 ± 2 per cent. The pulsar shows two distinct phases of emission. It is usually in an active phase, characterized by pulsations interspersed with shorter nulls, with a nulling fraction of about 15 per cent, but it also rarely switches to an inactive phase, consisting of long nulls. The nulls in this pulsar are concurrent between 326.5 and 610 MHz. Profile mode changes accompanied by changes in fluctuation properties are seen in this pulsar, which switches from mode A before a null to mode B after the null. The distribution of null durations in this pulsar is bimodal. With its occasional long nulls, PSR B1706-16 joins the small group of intermediate nullers, which lie between the classical nullers and the intermittent pulsars. Similar to other intermediate nullers, PSR B1706-16 shows high timing noise, which could be due to its rare long nulls if one assumes that the slowdown rate during such nulls is different from that during the bursts.
Across Space and Time: Social Responses to Large-Scale Biophysical Systems
Macmynowski, Dena P.
2007-06-01
The conceptual rubric of ecosystem management has been widely discussed and deliberated in conservation biology, environmental policy, and land/resource management. In this paper, I argue that two critical aspects of the ecosystem management concept require greater attention in policy and practice. First, although emphasis has been placed on the “space” of systems, the “time”—or rates of change—associated with biophysical and social systems has received much less consideration. Second, discussions of ecosystem management have often neglected the temporal disconnects between changes in biophysical systems and the response of social systems to management issues and challenges. The empirical basis of these points is a case study of the “Crown of the Continent Ecosystem,” an international transboundary area of the Rocky Mountains that surrounds Glacier National Park (USA) and Waterton Lakes National Park (Canada). This project assessed the experiences and perspectives of 1) middle- and upper-level government managers responsible for interjurisdictional cooperation, and 2) environmental nongovernment organizations with an international focus. I identify and describe 10 key challenges to increasing the extent and intensity of transboundary cooperation in land/resource management policy and practice. These issues are discussed in terms of their political, institutional, cultural, information-based, and perceptual elements. Analytic techniques include a combination of environmental history, semistructured interviews with 48 actors, and text analysis in a systematic qualitative framework. The central conclusion of this work is that the rates of response of human social systems must be better integrated with the rates of ecological change. This challenge is equal to or greater than the well-recognized need to adapt the spatial scale of human institutions to large-scale ecosystem processes and transboundary wildlife.
Ethical dilemmas of a large national multi-centre study in Australia: time for some consistency.
Driscoll, Andrea; Currey, Judy; Worrall-Carter, Linda; Stewart, Simon
2008-08-01
To examine the impact and obstacles that individual Institutional Research Ethics Committees (IRECs) had on a large-scale national multi-centre clinical audit called the National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study. Multi-centre research is commonplace in the health care system. However, IRECs continue to fail to differentiate between research and quality-audit projects. The National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study used an investigator-developed questionnaire concerning a clinical audit of heart failure programmes throughout Australia. Ethical guidelines developed by the national governing body of health and medical research in Australia classified the study as a low-risk clinical audit not requiring ethical approval by an IREC. Nevertheless, 15 of 27 IRECs stipulated that the research proposal undergo full ethical review. None of the IRECs acknowledged national quality-assurance guidelines and recommendations, nor ethics approval from other IRECs. Twelve of the 15 IRECs used different ethics application forms, and the types of amendments requested varied widely. The lack of uniformity in ethical review processes resulted in a six- to eight-month delay in commencing the national study. Development of a national ethics application form, with full ethical review by the first IREC and compulsory expedited review by subsequent IRECs, would resolve the issues raised in this paper. IRECs must change their ethics approval processes to ones that facilitate multi-centre research, which is now a normative process for health services. The findings of this study highlight inconsistent ethical requirements between different IRECs, as well as the obstacles and delays that IRECs create when undertaking multi-centre clinical audits
Czech Academy of Sciences Publication Activity Database
Georgiev, K.; Kosturski, N.; Margenov, S.; Starý, Jiří
2009-01-01
Roč. 226, č. 2 (2009), s. 268-274 ISSN 0377-0427 Institutional research plan: CEZ:AV0Z30860518 Keywords : Vacuum freeze drying * Zeolites * Heat and mass transfer * Finite element method * MIC(0) preconditioning Subject RIV: BA - General Mathematics Impact factor: 1.292, year: 2009 http://apps.isiknowledge.com
THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR
ARNOLD, ANTON
2012-11-01
We consider the linear Wigner–Fokker–Planck equation subject to confining potentials which are smooth perturbations of the harmonic oscillator potential. For a certain class of perturbations we prove that the equation admits a unique stationary solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker–Planck-type operators in certain weighted L²-spaces. In addition we show that the steady state corresponds to a positive density matrix operator with unit trace and that the solutions of the time-dependent problem converge towards the steady state with an exponential rate. © 2012 World Scientific Publishing Company.
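The exponential large-time convergence claimed in the abstract can be written schematically as follows (the notation is ours; the norm is that of the weighted space in which the spectral gap is established):

```latex
% Schematic statement of the large-time result (our notation):
% W(t) solves the Wigner-Fokker-Planck equation, W_\infty is the
% unique stationary state, and \lambda > 0 comes from the spectral gap.
\left\| W(t) - W_{\infty} \right\|
  \;\le\; C \, e^{-\lambda t} \,
\left\| W(0) - W_{\infty} \right\|,
\qquad t \ge 0 .
```

The constant λ is controlled by the size of the spectral gap of the Fokker–Planck-type operator, which is why the gap result is the key ingredient of the proof.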
Step out - Step in Sequencing Games
Musegaas, M.; Borm, P.E.M.; Quant, M.
2014-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.
Step out-step in sequencing games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2015-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,
Cyranka, Jacek; Mucha, Piotr B.; Titi, Edriss S.; Zgliczyński, Piotr
2018-04-01
The paper studies the issue of stability of solutions to the forced Navier-Stokes and damped Euler systems in periodic boxes. It is shown that for a large, but fixed, Grashof (Reynolds) number the turbulent behavior of all Leray-Hopf weak solutions of the three-dimensional Navier-Stokes equations, in a periodic box, is suppressed, when viewed in the right frame of reference, by a large enough average flow of the initial data; a phenomenon that is similar in spirit to the Landau damping. Specifically, we consider initial data which have a large enough spatial average; then, by means of the Galilean transformation, and thanks to the periodic boundary conditions, the large time-independent forcing term changes into a highly oscillatory force, which then allows us to employ some averaging principles to establish our result. Moreover, we also show that under the action of fast oscillatory-in-time external forces all two-dimensional regular solutions of the Navier-Stokes and the damped Euler equations converge to a unique time-periodic solution.
Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide
Directory of Open Access Journals (Sweden)
Li Lin
2017-01-01
A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL) followed by a dry etching process. Afterwards, the Si stamp was employed in nanoimprint lithography (NIL) assisted with a dry etching process to produce nanoholes on the SiO2 layer. The demonstrated approach has advantages such as a high resolution in nanoscale by EBL and good reproducibility by NIL. In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size variation of the nanostructures resulting from exposure parameters in EBL, the pattern transfer during nanoimprint in NIL, and subsequent etching processes of SiO2 were also studied quantitatively. By this method, a hexagonally arranged hole array in SiO2 with a hole diameter ranging from 45 to 75 nm and a pitch of 600 nm was demonstrated on a four-inch wafer.
Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird.
Directory of Open Access Journals (Sweden)
Emily L C Shepard
Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered.
Tandem mirror next step: remote maintenance
International Nuclear Information System (INIS)
Doggett, J.N.; Damm, C.C.; Hanson, C.L.
1980-01-01
This study of the next proposed experiment in the Mirror Fusion Program, the Tandem Mirror Next Step (TMNS), has included serious consideration of the maintenance requirements of such a large source of high-energy neutrons with its attendant throughput of tritium. Although maintenance will be costly in time and money, our conclusion is that, with careful attention to a design-for-maintenance plan, such a device can be reliably operated.
Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) method vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed a strong linear correlation with the two-step PCR method (r = 0.89; p Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
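The agreement between the two assays is summarized above by a Pearson correlation (r = 0.89 in the study); a minimal sketch of that statistic on illustrative, non-study log10 viral loads:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, the statistic used to compare
    the one-step and two-step HBV PCR quantifications."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired log10 viral loads (IU/ml) -- not the study's data.
one_step = [3.9, 4.2, 5.1, 2.8, 6.0, 3.3]
two_step = [3.6, 4.0, 5.3, 2.5, 5.7, 3.5]
print(round(pearson_r(one_step, two_step), 2))
```

Working on log10-transformed viral loads, as is standard for HBV DNA, keeps the correlation from being dominated by a few very high-titer samples.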
Bernaards, Claire M; Hildebrandt, Vincent H; Hendriksen, Ingrid J M
2016-10-26
Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). The study sample consisted of 1895 Dutch children (4-11 years), 1131 adolescents (12-17 years), 8003 adults (18-64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey 'Injuries and Physical Activity in the Netherlands' between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. This study provides new insights in the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of the relationship between identified correlates and ST.
Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari
2017-10-01
We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
International Nuclear Information System (INIS)
Althaus, R.F.; Kirsten, F.A.; Lee, K.L.; Olson, S.R.; Wagner, L.J.; Wolverton, J.M.
1976-10-01
A large-scale digitizer (LSD) system for acquiring charge and time-of-arrival particle data from high-energy-physics experiments has been developed at the Lawrence Berkeley Laboratory. The objective in this development was to significantly reduce the cost of instrumenting large-detector arrays which, for the 4π-geometry of colliding-beam experiments, are proposed with an order of magnitude increase in channel count over previous detectors. In order to achieve the desired economy (approximately $65 per channel), a system was designed in which a number of control signals for conversion, for digitization, and for readout are shared in common by all the channels in each 128-channel bin. The overall-system concept and the distribution of control signals that are critical to the 10-bit charge resolution and to the 12-bit time resolution are described. Also described is the bit-serial transfer scheme, chosen for its low component and cabling costs
International Nuclear Information System (INIS)
Sabati, M; Lauzon, M L; Frayne, R
2003-01-01
Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. A preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large-FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large-FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than the stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large-FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen
2017-03-01
Long-lead-time flood forecasting is very important for large-watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling such products with a distributed hydrological model can produce long-lead-time watershed flood forecasts. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large-watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloadable terrain property data; the model parameters were previously optimized with rain-gauge-observed precipitation and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to the rain gauge precipitation, and a post-processing method is proposed for the WRF QPF products which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves, suggesting that the model parameters should be optimized with the QPF rather than with the rain gauge precipitation. As lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large-watershed flood warning due to their long lead time and reasonable results.
Global low-energy weak solution and large-time behavior for the compressible flow of liquid crystals
Wu, Guochun; Tan, Zhong
2018-06-01
In this paper, we consider the weak solution of the simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows in R3. When the initial data are of small energy and initial density is positive and essentially bounded, we prove the existence of a global weak solution in R3. The large-time behavior of a global weak solution is also established.
International Nuclear Information System (INIS)
Goyot, M.
1975-05-01
A broadband, low-noise charge preamplifier was developed in hybrid form for a recoil spectrometer requiring large-capacitance semiconductor detectors. This new hybrid, low-cost preamplifier provides good timing information without compromising energy resolution. With a 500 pF external input capacitance, it delivers two simultaneous outputs: (i) a fast, current-sensitive output with a rise time of 9 ns and 2 mV/MeV into a 50 Ω load, and (ii) a slower, charge-sensitive output with an energy resolution of 14 keV (FWHM, Si) using an ungated 2 μs RC-CR filter and a FET input protection.
International Nuclear Information System (INIS)
Nasserzadeh, V.; Swithenbank, J.; Jones, B.
1995-01-01
The problem of measuring gas-residence time in large incinerators was studied by the pseudo-random binary sequence (PRBS) stimulus tracer response technique at the Sheffield municipal solid-waste incinerator (35 MW plant). The steady-state system was disturbed by the superimposition of small fluctuations in the form of a pseudo-random binary sequence of methane pulses, and the response of the incinerator was determined from the CO2 concentration in flue gases at the boiler exit, measured with a specially developed optical gas analyser with a high-frequency response. For data acquisition, an on-line PC was used together with the LAB Windows software system; the output response was then cross-correlated with the perturbation signal to give the impulse response of the incinerator. There was very good agreement between the gas-residence time for the Sheffield MSW incinerator as calculated by computational fluid dynamics (FLUENT model) and the gas-residence time at the plant as measured by the PRBS tracer technique. The results obtained from this research programme clearly demonstrate that the PRBS stimulus tracer response technique can be successfully and economically used to measure gas-residence times in large incinerator plants. They also suggest that the common commercial practice of characterising incinerator operation by a single residence-time parameter may misrepresent the complexities involved in describing the operation of the incineration system.
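The core of the PRBS technique — cross-correlating the measured output with the input sequence to recover the system's impulse response — can be sketched in a few lines. This is an illustrative toy first-order system with made-up parameters, not the Sheffield plant model:

```python
import numpy as np

def prbs(length, taps=(6, 5), nbits=7, seed=0b1):
    # LFSR-based pseudo-random binary sequence with +/-1 levels
    # (taps correspond to the maximal-length polynomial x^7 + x^6 + 1)
    reg, out = seed, []
    for _ in range(length):
        bit = ((reg >> taps[0]) ^ (reg >> taps[1])) & 1
        reg = ((reg << 1) | bit) & ((1 << nbits) - 1)
        out.append(1.0 if reg & 1 else -1.0)
    return np.array(out)

# Toy "incinerator": first-order decay with a 5-sample time constant
h_true = np.exp(-np.arange(20) / 5.0)
u = prbs(2000)                        # methane-pulse perturbation
y = np.convolve(u, h_true)[: len(u)]  # simulated CO2 response

# Cross-correlating output with input estimates the impulse response,
# because the PRBS autocorrelation is approximately a delta function
c = np.correlate(y, u, mode="full")
h_est = c[len(u) - 1 : len(u) - 1 + len(h_true)] / len(u)
```

From `h_est`, a mean gas-residence time would follow as the first moment of the normalized impulse response.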
Control Software for Piezo Stepping Actuators
Shields, Joel F.
2013-01-01
A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
On the Convexity of Step out - Step in Sequencing Games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2016-01-01
The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an
Directory of Open Access Journals (Sweden)
Yunliang Li
2015-04-01
Most biochemical processes and associated water quality in lakes depend on their flushing abilities. The main objective of this study was to investigate the transport time scale in a large floodplain lake, Poyang Lake (China). A 2D hydrodynamic model (MIKE 21) was combined with dye tracer simulations to determine residence and travel times of the lake for various water-level variation periods. The results indicate that Poyang Lake exhibits strong but spatially heterogeneous residence times that vary with its highly seasonal water-level dynamics. Generally, the average residence times are less than 10 days along the lake's main flow channels due to the prevailing northward flow pattern, whereas approximately 30 days were estimated during high-water-level conditions in the summer. The local topographically controlled flow patterns substantially increase the residence time in some bays, with high spatial values of six months to one year during all water-level variation periods. Depending on changes in the water-level regime, the travel times from the pollution sources to the lake outlet during the high and falling water-level periods (up to 32 days) are four times greater than those under the rising and low water-level periods (approximately seven days).
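The residence-time notion used here can be illustrated with a minimal e-folding calculation on a synthetic tracer washout curve (all values invented for illustration, not MIKE 21 output):

```python
import math

# Synthetic dye-tracer washout in one lake cell: 8-day e-folding (made up)
dt_days = 0.5
conc = [math.exp(-i * dt_days / 8.0) for i in range(200)]

# Residence time: first instant the normalized concentration falls below 1/e
threshold = conc[0] / math.e
residence_days = next(i * dt_days for i, c in enumerate(conc) if c < threshold)
```

For a spatially resolved model, the same threshold test applied cell by cell yields the kind of heterogeneous residence-time map the abstract describes.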
International Nuclear Information System (INIS)
Wetstein, Matthew
2011-01-01
Microchannel plate photomultiplier tubes (MCPs) are compact, imaging detectors, capable of micron-level spatial imaging and timing measurements with resolutions below 10 ps. Conventional fabrication methods are too expensive for making MCPs in the quantities and sizes necessary for typical HEP applications, such as time-of-flight ring-imaging Cherenkov detectors (TOF-RICH) or water Cherenkov-based neutrino experiments. The Large Area Picosecond Photodetector Collaboration (LAPPD) is developing new, commercializable methods to fabricate 20 cm² thin planar MCPs at costs comparable to those of traditional photo-multiplier tubes. Transmission-line readout with waveform sampling on both ends of each line allows the efficient coverage of large areas while maintaining excellent time and space resolution. Rather than fabricating channel plates from active, high secondary electron emission materials, we produce plates from passive substrates, and coat them using atomic layer deposition (ALD), a well established industrial batch process. In addition to possible reductions in cost and conditioning time, this allows greater control to optimize the composition of active materials for performance. We present details of the MCP fabrication method, preliminary results from testing and characterization facilities, and possible HEP applications.
Miao, Y J; Xiong, G T; Bai, M Y; Ge, Y; Wu, Z F
2018-05-01
Fresh-cut produce is at greater risk of Salmonella contamination. Detection and early warning systems play an important role in reducing the dissemination of contaminated products. One-step reverse transcription quantitative PCR (RT-qPCR) targeting Salmonella tmRNA, with or without a 6-h enrichment, was evaluated for the detection of Salmonella in fresh-cut vegetables after 6-h storage. The LOD of one-step RT-qPCR was 1.0 CFU per ml (about 100 copies tmRNA per ml), assessed with 10-fold serially diluted RNA from a 10⁶ CFU per ml bacterial culture. Then, the one-step RT-qPCR assay was applied to detect viable Salmonella cells in 14 fresh-cut vegetables after 6-h storage. Without enrichment, the assay could detect 10 CFU per g for fresh-cut lettuce, cilantro, spinach, cabbage, Chinese cabbage and bell pepper, and 10² CFU per g for the other vegetables. With a 6-h enrichment, it could detect 10 CFU per g for all fresh-cut vegetables used in this study. Moreover, the assay was able to discriminate viable cells from dead cells. This rapid detection assay may provide a potential process-control and early-warning method in fresh-cut vegetable processing to strengthen food safety assurance. Significance and Impact of the Study: Fresh-cut produce is at greater risk of Salmonella contamination. Rapid detection methods play an important role in reducing the dissemination of contaminated products. The one-step RT-qPCR assay used in this study could detect 10 CFU per g Salmonella in 14 fresh-cut vegetables with a short 6-h enrichment, and was able to discriminate viable cells from dead cells, providing a potential process-control and early-warning method in fresh-cut vegetable processing to strengthen food safety assurance. © 2018 The Society for Applied Microbiology.
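The 10-fold-dilution LOD assessment rests on the standard qPCR relationship between Ct and log₁₀ template amount. A minimal standard-curve fit, with illustrative Ct values rather than the paper's data, looks like:

```python
# Hypothetical Ct values for 10-fold serial dilutions (10^6 .. 10^1 copies/ml)
log10_copies = [6, 5, 4, 3, 2, 1]
ct_values = [15.1, 18.5, 21.8, 25.2, 28.6, 31.9]  # illustrative only

# Least-squares slope of Ct versus log10(template copies)
n = len(ct_values)
mean_x = sum(log10_copies) / n
mean_y = sum(ct_values) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(log10_copies, ct_values))
         / sum((x - mean_x) ** 2 for x in log10_copies))

# Amplification efficiency: E = 10^(-1/slope) - 1
# (a slope near -3.32 corresponds to ~100 % efficiency, i.e. doubling per cycle)
efficiency = 10 ** (-1.0 / slope) - 1.0
```

An LOD claim of "1.0 CFU per ml" then corresponds to the lowest dilution at which amplification remains reliably detectable on this curve.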
Directory of Open Access Journals (Sweden)
Andreia Leite
2017-04-01
This work shows that most diagnoses examined were recorded with a delay of ≤30 days, making NRTVSS (near real-time vaccine safety surveillance) feasible. The distribution of the delays was condition-specific, and the weekly delay distribution could be used to adjust for delays in the NRTVSS analysis. CPRD (the UK Clinical Practice Research Datalink) can be a viable data source for this kind of analysis; next steps will include a trial implementation of the system using these data.
Directory of Open Access Journals (Sweden)
Y. Kawada
2007-01-01
Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the increase in its rate follows a power law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to increased radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and in the permeability (hydraulic conductivity) of the crustal rocks are caused by the temporal changes in the power law of the crustal strain (or cumulative Benioff strain), which is associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can thus be used as a proxy for the seismic precursory processes associated with crustal dynamics.
DEFF Research Database (Denmark)
Rossi, Matteo; Olsson, Per-Ivar; Johansson, Sara
2017-01-01
An investigation of geological conditions is always a key point for planning infrastructure constructions. Bedrock surface and rock quality must be estimated carefully in the designing process of infrastructures. A large direct-current resistivity and time-domain induced-polarization survey has been carried out; in the surveyed area, there are northwest-trending Permian dolerite dykes that are less deformed. Four 2D direct-current resistivity and time-domain induced-polarization profiles of about 1-km length have been carefully pre-processed to retrieve time-domain induced-polarization responses and inverted to obtain the direct-current resistivity distribution of the subsoil and the phase of the complex conductivity using a constant-phase-angle model. The joint interpretation of the electrical resistivity and induced-polarization models leads to a better understanding of complex three-dimensional subsoil geometries.
Yin, Stuart (Shizhuo); Chao, Ju-Hung; Zhu, Wenbin; Chen, Chang-Jiang; Campbell, Adrian; Henry, Michael; Dubinskiy, Mark; Hoffman, Robert C.
2017-08-01
In this paper, we present a novel large-capacity (1000+ channel) time-division multiplexing (TDM) laser beam combining technique by harnessing a state-of-the-art nanosecond-speed potassium tantalate niobate (KTN) electro-optic (EO) beam deflector as the time-division multiplexer. The major advantages of the TDM approach are: (1) large multiplexing capability (over 1000 channels); (2) high spatial beam quality (the combined beam has the same spatial profile as the individual beam); (3) high spectral beam quality (the combined beam has the same spectral width as the individual beam); and (4) insensitivity to the phase fluctuations of the individual lasers, owing to the incoherent nature of the beam combining. Quantitative analyses show that it is possible to achieve an over one hundred kW average power, single-aperture, single-transverse-mode solid-state and/or fiber laser by pursuing this innovative beam combining method, which represents a major technical advance in the field of high-energy lasers. Such 100+ kW average power, diffraction-limited-beam-quality lasers can play an important role in a variety of applications such as laser directed-energy weapons (DEW) and large-capacity high-speed laser manufacturing, including cutting, welding, and printing.
Focal cryotherapy: step by step technique description
Directory of Open Access Journals (Sweden)
Cristina Redondo
Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated one time. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70-month follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment.
Prochazka, Ivan; Kodet, Jan; Eckl, Johann; Blazej, Josef
2017-10-01
We are reporting on the design, construction, and performance of a photon counting detector system, which is based on single photon avalanche diode detector technology. This photon counting device has been optimized for very high timing resolution and stability of its detection delay. The foreseen application of this detector is laser ranging of space objects, laser time transfer ground to space and fundamental metrology. The single photon avalanche diode structure, manufactured on silicon using K14 technology, is used as a sensor. The active area of the sensor is circular with 200 μm diameter. Its photon detection probability exceeds 40% in the wavelength range spanning from 500 to 800 nm. The sensor is operated in active quenching and gating mode. A new control circuit was optimized to maintain high timing resolution and detection delay stability. In connection to this circuit, timing resolution of the detector is reaching 20 ps FWHM. In addition, the temperature change of the detection delay is as low as 70 fs/K. As a result, the detection delay stability of the device is exceptional: expressed in the form of time deviation, detection delay stability of better than 60 fs has been achieved. Considering the large active area aperture of the detector, this is, to our knowledge, the best timing performance reported for a solid state photon counting detector so far.
International Nuclear Information System (INIS)
Bhattacharya, Deb Sankar; Majumdar, Nayana; Sarkar, S.; Bhattacharya, S.; Mukhopadhyay, Supratik; Bhattacharya, P.; Attie, D.; Colas, P.; Ganjour, S.; Bhattacharya, Aparajita
2016-01-01
The principal particle tracker at the International Linear Collider (ILC) is planned to be a large Time Projection Chamber (TPC), with different Micro-Pattern Gaseous Detectors (MPGDs) as candidates for the gaseous amplifier. A Micromegas (MM)-based TPC can meet the ILC requirement of continuous and precise pattern recognition. Seven MM modules, working as the end-plate of a Large Prototype TPC (LPTPC) installed at DESY, have been tested with a 5 GeV electron beam. Due to the grounded peripheral frame of the MM modules, at low drift fields the electric field lines near the detector edge are no longer parallel to the TPC axis. This causes signal loss along the boundaries of the MM modules as well as distortion in the reconstructed track. In the presence of a magnetic field, the distorted electric field introduces an E×B effect.
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
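The Long-Term Fault Memory idea — rupture probability growing with stored strain, and each event releasing only part of that strain — can be caricatured in a few lines. All parameters here, including the quartic hazard function, are hypothetical illustrations rather than the authors' calibration:

```python
import random

def simulate_ltfm(n_years=20000, loading=1.0, release_frac=0.5,
                  threshold=500.0, seed=42):
    # Strain accumulates steadily; the yearly rupture hazard grows with
    # stored strain; an earthquake releases only a fraction of the strain,
    # so the fault retains "memory" across multiple cycles.
    rng = random.Random(seed)
    strain, quake_years = 0.0, []
    for year in range(n_years):
        strain += loading
        hazard = min(1.0, (strain / threshold) ** 4)  # hypothetical form
        if rng.random() < hazard:
            quake_years.append(year)
            strain *= (1.0 - release_frac)            # partial release
    return quake_years
```

Because strain is not reset to zero, the residual hazard right after an event stays elevated, so events can follow each other closely and then be separated by long quiet intervals, reproducing the cluster-and-gap pattern described above.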
Getty, Stephanie A.; Brinckerhoff, William B.; Li, Xiang; Elsila, Jamie; Cornish, Timothy; Ecelberger, Scott; Wu, Qinghao; Zare, Richard
2014-01-01
Two-step laser desorption mass spectrometry is a well suited technique to the analysis of high priority classes of organics, such as polycyclic aromatic hydrocarbons, present in complex samples. The use of decoupled desorption and ionization laser pulses allows for sensitive and selective detection of structurally intact organic species. We have recently demonstrated the implementation of this advancement in laser mass spectrometry in a compact, flight-compatible instrument that could feasibly be the centerpiece of an analytical science payload as part of a future spaceflight mission to a small body or icy moon.
International Nuclear Information System (INIS)
Rivarolo, M.; Magistri, L.; Massardo, A.F.
2014-01-01
Highlights: • We investigate H2 and CH4 production from a very large hydraulic plant (14 GW). • We employ only "spilled energy", not used by the hydraulic plant, for H2 production. • We consider the integration with energy taken from the grid at different prices. • We consider hydrogen conversion in chemical reactors to produce methane. • We find the plants' optimal size using a time-dependent thermo-economic approach. - Abstract: This paper investigates hydrogen and methane generation from a large hydraulic plant, using an original multilevel thermo-economic optimization approach developed by the authors. Hydrogen is produced by water electrolysis employing time-dependent hydraulic energy related to the water which is not normally used by the plant, known as "spilled water electricity". Both the demand for spilled energy and the electrical grid load vary widely by time of year; therefore a time-dependent, hour-by-hour analysis over one complete year has been carried out in order to define the optimal plant size. This time-period analysis is necessary to take into account the variability of spilled energy and electrical load profiles during the year. The hydrogen generation plant is based on 1 MWe water electrolysers fuelled with the "spilled water electricity", when available; in the remaining periods, in order to assure a regular H2 production, the energy is taken from the electrical grid, at higher cost. To perform the production plant size optimization, two hierarchical levels have been considered over a one-year time period, in order to minimize capital and variable costs. After the optimization of the hydrogen production plant size, a further analysis is carried out with a view to converting the produced H2 into methane in a chemical reactor, starting from H2 and CO2 which is obtained with CCS plants and/or carried by ships. For this plant, the optimal electrolysers and chemical reactors system size is defined. For both of the two solutions, thermo
Roosen, David; Wegewijs, Maarten R.; Hofstetter, Walter
2008-02-01
We investigate the time-dependent Kondo effect in a single-molecule magnet (SMM) strongly coupled to metallic electrodes. Describing the SMM by a Kondo model with large spin S>1/2, we analyze the underscreening of the local moment and the effect of anisotropy terms on the relaxation dynamics of the magnetization. Underscreening by single-channel Kondo processes leads to a logarithmically slow relaxation, while finite uniaxial anisotropy causes a saturation of the SMM’s magnetization. Additional transverse anisotropy terms induce quantum spin tunneling and a pseudospin-1/2 Kondo effect sensitive to the spin parity.
A large set of potential past, present and future hydro-meteorological time series for the UK
Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.
2018-01-01
Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period ( > 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represents past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistently with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions
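The linear bias correction applied to the modelled precipitation can be sketched as a multiplicative rescaling toward the observed mean. The function below is an illustrative assumption, not the actual weather@home 2 post-processing code.

```python
def linear_bias_correction(modelled, observed_mean, modelled_mean):
    """Rescale a modelled precipitation series so its mean matches the
    observed climatology. A minimal multiplicative sketch of a linear
    bias correction; the real implementation may differ in detail."""
    if modelled_mean <= 0:
        raise ValueError("modelled mean must be positive")
    factor = observed_mean / modelled_mean
    return [factor * p for p in modelled]

# A wet-biased month: model mean 4 mm/day vs. observed mean 3 mm/day.
corrected = linear_bias_correction([2.0, 4.0, 6.0],
                                   observed_mean=3.0, modelled_mean=4.0)
```

In practice such factors would be estimated per month and per region before being applied to each ensemble member.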
Hong, Kyeongsoo; Koo, Jae-Rim; Lee, Jae Woo; Kim, Seung-Lee; Lee, Chung-Uk; Park, Jang-Ho; Kim, Hyoun-Woo; Lee, Dong-Joo; Kim, Dong-Jin; Han, Cheongho
2018-05-01
We report the results of photometric observations for the doubly eclipsing binaries OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159, each of which is composed of two pairs (designated A and B) of detached eclipsing binaries located in the Large Magellanic Cloud. The light curves were obtained by high-cadence time-series photometry using the Korea Microlensing Telescope Network 1.6 m telescopes located at three southern sites (CTIO, SAAO, and SSO) between 2016 September and 2017 January. The orbital periods were determined to be 1.433 and 1.387 days for components A and B of OGLE-LMC-ECL-15674, respectively, and 2.988 and 3.408 days for OGLE-LMC-ECL-22159A and B, respectively. Our light curve solutions indicate that the significant changes in the eclipse depths of OGLE-LMC-ECL-15674A and B were caused by variations in their inclination angles. The eclipse timing diagrams of the A and B components of OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159 were analyzed using 28, 44, 28, and 26 new times of minimum light, respectively. The apsidal motion period of OGLE-LMC-ECL-15674B was estimated by detailed analysis of eclipse timings for the first time. The detached eclipsing binary OGLE-LMC-ECL-15674B shows a fast apsidal period of 21.5 ± 0.1 years.
Energy Technology Data Exchange (ETDEWEB)
Lawrenz, M.
2007-10-30
In the present work, the dynamics of CO molecules on a stepped Pt(111) surface induced by fs-laser pulses at low temperatures was studied by laser spectroscopy. In the first part of the work, laser-induced diffusion in the CO/Pt(111) system was demonstrated and successfully modelled for step diffusion. First, the diffusion of CO molecules from step sites to terrace sites on the surface was traced. The experimentally determined energy transfer time of 500 fs for this process confirms the assumption of an electronically induced process. It is then explained how the experimental results were modelled: a friction coefficient that depends on the electron temperature yields a consistent model, and the same set of parameters reproduces both the fluence dependence and the time-resolved measurements. Furthermore, the analysis was extended to CO terrace diffusion. Small coverages of CO were adsorbed on the terraces, and diffusion was detected as the temporal evolution of the occupation of the step sites, which act as traps for the diffusing molecules. Two-pulse correlation measurements performed in addition also indicate an electronically induced process. At a substrate temperature of 40 K, the cross-correlation, from which an energy transfer time of 1.8 ps was extracted, likewise suggests an electronically induced energy transfer mechanism. Diffusion experiments were performed for different substrate temperatures. (orig.)
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system (BMS). In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models (an electrochemical model, a heat generation model, and a thermal model) coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).
Application of large area SiPMs for the readout of a plastic scintillator based timing detector
Betancourt, C.; Blondel, A.; Brundler, R.; Dätwyler, A.; Favre, Y.; Gascon, D.; Gomez, S.; Korzenev, A.; Mermod, P.; Noah, E.; Serra, N.; Sgalaberna, D.; Storaci, B.
2017-11-01
In this study an array of eight 6 mm × 6 mm area SiPMs was coupled to the end of a long plastic scintillator counter which was exposed to a 2.5 GeV/c muon beam at the CERN PS. Timing characteristics of bars with dimensions 150 cm × 6 cm × 1 cm and 120 cm × 11 cm × 2.5 cm have been studied. An 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor has been used to read out and amplify SiPMs independently and sum the signals at the end. Prospects for applications in large-scale particle physics detectors with timing resolution below 100 ps are provided in light of the results.
Large Observatory for x-ray Timing (LOFT-P): a Probe-class mission concept study
Wilson-Hodge, Colleen A.; Ray, Paul S.; Chakrabarty, Deepto; Feroci, Marco; Alvarez, Laura; Baysinger, Michael; Becker, Chris; Bozzo, Enrico; Brandt, Soren; Carson, Billy; Chapman, Jack; Dominguez, Alexandra; Fabisinski, Leo; Gangl, Bert; Garcia, Jay; Griffith, Christopher; Hernanz, Margarita; Hickman, Robert; Hopkins, Randall; Hui, Michelle; Ingram, Luster; Jenke, Peter; Korpela, Seppo; Maccarone, Tom; Michalska, Malgorzata; Pohl, Martin; Santangelo, Andrea; Schanne, Stephane; Schnell, Andrew; Stella, Luigi; van der Klis, Michiel; Watts, Anna; Winter, Berend; Zane, Silvia
2016-07-01
LOFT-P is a mission concept for a NASA Astrophysics Probe-Class X-ray timing mission addressing the questions: What is the state of ultradense matter? What are the effects of strong gravity on matter spiraling into black holes? It would be optimized for sub-millisecond timing of bright Galactic X-ray sources, including X-ray bursters, black hole binaries, and magnetars, to study phenomena at the natural timescales of neutron star surfaces and black hole event horizons and to measure the mass and spin of black holes. These measurements are synergistic with imaging and high-resolution spectroscopy instruments, addressing much smaller distance scales than are possible without very long baseline X-ray interferometry, and using complementary techniques to address the geometry and dynamics of emission regions. LOFT-P would have an effective area of >6 m2, more than 10 times that of the highly successful Rossi X-ray Timing Explorer (RXTE). A sky monitor (2-50 keV) acts as a trigger for pointed observations, providing high-duty-cycle, high-time-resolution monitoring of the X-ray sky with 20 times the sensitivity of the RXTE All-Sky Monitor, enabling multi-wavelength and multimessenger studies. A probe-class mission concept would employ lightweight collimator technology and large-area solid-state detectors, segmented into pixels or strips, technologies which were recently greatly advanced during the ESA M3 Phase A study of LOFT. Given the large community interested in LOFT (>800 supporters), the scientific productivity of this mission is expected to be very high, similar to or greater than that of RXTE (~2000 refereed publications). We describe the results of a study, recently completed by the MSFC Advanced Concepts Office, demonstrating that such a mission is feasible within a NASA probe-class mission budget.
Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan
2018-01-01
This paper discusses the ionosphere's response to the largest storm of solar cycle 24 during 16-18 March 2015. We have used the Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator to the other side of the hemisphere especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s have been observed during local daytime over the American and African sectors with origin from around the geomagnetic equator. Our investigation over the American sector shows that poleward TIDs may have been launched by increased Lorentz coupling as a result of penetrating electric field during the southward turning of the interplanetary magnetic field, Bz. We have observed increase in SWARM satellite electron density (Ne) at the same time when equatorward large-scale TIDs are visible over the European-African sector. The altitude Ne profiles from ionosonde observations show a possible link that storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at SWARM satellite altitude.
Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.
2016-12-01
A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives acquired over a large area of Southern California (US) extending for about 90,000 km2. This input dataset was processed in parallel on 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, permitting extension of DInSAR analyses to a nearly global scale. This work is partially supported by the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.
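Combining ascending and descending line-of-sight (LOS) measurements to separate vertical and East-West motion reduces, per pixel, to a small linear solve. A minimal sketch under the simplifying assumption that satellite heading is ignored (real DInSAR geometry includes it):

```python
import numpy as np

def decompose_los(d_asc, d_dsc, inc_asc, inc_dsc):
    """Recover vertical (up) and east-west displacement components from
    ascending and descending LOS measurements.

    Simplified sketch: the satellite heading is ignored, so the east
    component enters with opposite signs on the two passes; incidence
    angles are in radians.
    """
    A = np.array([
        [np.cos(inc_asc), -np.sin(inc_asc)],  # ascending pass
        [np.cos(inc_dsc),  np.sin(inc_dsc)],  # descending pass
    ])
    up, east = np.linalg.solve(A, np.array([d_asc, d_dsc]))
    return up, east

# Round trip: project a known motion onto both LOS, then recover it.
inc = np.deg2rad(23.0)
d_asc = 10.0 * np.cos(inc) - (-3.0) * np.sin(inc)
d_dsc = 10.0 * np.cos(inc) + (-3.0) * np.sin(inc)
up, east = decompose_los(d_asc, d_dsc, inc, inc)
```

The same 2x2 solve is applied independently at every coherent pixel where both acquisition geometries are available.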
Directory of Open Access Journals (Sweden)
Runchun Mark Wang
2015-05-01
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
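The pair-based STDP rule that such adaptors implement in hardware can be illustrated in software. The amplitudes and the 20 ms time constant below are generic textbook defaults, not values from the paper.

```python
import math

def stdp_weight_change(dt, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes
    the post-synaptic one (dt = t_post - t_pre > 0), depress otherwise.
    The change decays exponentially with the spike time difference.
    Constants are illustrative defaults, not the paper's parameters."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

An adaptor would accumulate such changes into the stored weight each time a pre/post spike pair is assigned to it.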
Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model
Paga, Pierre; Kühn, Reimer
2017-08-01
We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that, in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
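The first-order forward dynamics m_{t+1} = f(m_t) can be illustrated by iterating a mean-field map. The bimodal random field and the parameter values below are assumptions made for the sake of the example, not the paper's setup.

```python
import math

def f(m, beta=1.5, h0=0.3):
    """One parallel-update step of the mean-field magnetization map
    m_{t+1} = f(m_t) for a Curie-Weiss model with a symmetric bimodal
    random field of strength h0 (illustrative disorder and parameters)."""
    return 0.5 * (math.tanh(beta * (m + h0)) + math.tanh(beta * (m - h0)))

# Forward relaxation dynamics: iterate the map toward a fixed point.
m = 0.9
for _ in range(200):
    m = f(m)
```

With these parameters the map contracts onto a nonzero magnetized fixed point; the backward dynamics would instead invert f at each step.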
International Nuclear Information System (INIS)
Parker, Leonard; Vanzella, Daniel A.T.
2004-01-01
We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do
Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A
2013-01-01
Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED
International Nuclear Information System (INIS)
Syed, E.V.; Salaita, G.N.; McCaffery, F.G.
1991-01-01
Cased hole logging with pulsed neutron tools finds extensive use for identifying zones of water breakthrough and monitoring oil-water contacts in oil reservoirs being depleted by waterflooding or natural water drive. Results of such surveys then find direct use for planning recompletions and water shutoff treatments. Pulsed neutron capture (PNC) logs are useful for estimating water saturation changes behind casing in the presence of a constant, high-salinity environment. PNC log surveys run at different times, i.e., in a time-lapse mode, are particularly amenable to quantitative analysis. The combined use of the original open hole and PNC time-lapse log information can then provide information on remaining or residual oil saturations in a reservoir. This paper reports analyses of historical pulsed neutron capture log data to assess residual oil saturation in naturally water-swept zones for selected wells from a large sandstone reservoir in the Middle East. Quantitative determination of oil saturations was aided by PNC log information obtained from a series of tests conducted in a new well in the same field
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
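The idea of searching for the common intersection of hyperboloids defined by sensor pairs can be sketched as a continuous least-squares objective over arrival-time differences. This is a simplified stand-in for the paper's virtual-field objective, not the VFOM itself.

```python
import itertools
import math

def tdoa_objective(x, sensors, arrivals, v):
    """Sum of squared hyperbolic residuals over all sensor pairs: each
    pair constrains the source to a hyperboloid set by the difference
    of arrival times. Simplified illustration, not the authors' VFOM."""
    total = 0.0
    for (si, ti), (sj, tj) in itertools.combinations(zip(sensors, arrivals), 2):
        residual = math.dist(x, si) - math.dist(x, sj) - v * (ti - tj)
        total += residual ** 2
    return total

# Synthetic 2D example: arrivals from a known source, with an unknown
# common origin time that cancels in the pairwise differences.
sensors = [(10.0, 0.0), (0.0, 10.0), (-10.0, 0.0), (0.0, -10.0), (7.0, 7.0)]
v = 2.0
source = (1.0, 2.0)
arrivals = [math.dist(source, s) / v + 5.0 for s in sensors]
```

Because pairwise differences cancel the origin time, the objective vanishes at the true source even when absolute arrival times are offset; large picking errors on a single sensor then perturb only the pairs involving it.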
Herda, Maxime; Rodrigues, L. Miguel
2018-03-01
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and in which dependencies on the mean free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values, from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly, we pay special attention to relaxing as much as possible the τ-dependent constraint on δ that ensures exponential decay, with explicit τ-dependent rates, towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniform with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and on careful tracking and optimization of parameter dependencies in hypocoercive/hypoelliptic estimates.
Directory of Open Access Journals (Sweden)
Gray G.T.
2012-08-01
Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.
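A linear empirical time-temperature relation of the kind described above can be sketched as a temperature shift proportional to the decade change in strain-rate. The slope used below is a placeholder, not the fitted HDPE constant from the paper.

```python
import math

def equivalent_temperature(t_test, rate_test, rate_target, slope=7.0):
    """Map a (temperature, strain-rate) test condition to the equivalent
    temperature at another strain-rate via the assumed linear relation
    T_eq = T_test + slope * log10(rate_target / rate_test).
    The slope (K per decade of strain-rate) is an illustrative
    placeholder, not the paper's fitted HDPE value."""
    return t_test + slope * math.log10(rate_target / rate_test)

# A low-rate test at 233 K maps to an equivalent high-rate condition
# six decades of strain-rate away.
t_eq = equivalent_temperature(233.0, rate_test=1e-3, rate_target=1e3)
```

With such a mapping, stress-strain curves measured slowly at depressed temperature can be re-labelled as high-rate curves at the equivalent temperature.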
International Nuclear Information System (INIS)
Frullani, Salvatore; Castelluccio, Donato M.; Cisbani, Evaristo; Colilli, Stefano; Fratoni, Rolando; Giuliani, Fausto; Mostarda, Angelo; Colangeli, Giorgio; De Otto, Gian L.; Marchiori, Carlo; Paoloni, Gianfranco
2008-01-01
Aerial platform equipped with a sampling line and real-time monitoring of sampled aerosol is presented. The system is composed of: a) a Sky Arrow 650 fixed-wing aircraft with the front part of the fuselage adapted to house the detection and acquisition equipment; b) a compact air sampling line in which isokinetic sampling is dynamically maintained; aerosol is collected on a filter positioned along the line and hosted on a rotating 4-filter disk; c) a detection subsystem: a small BGO scintillator and Geiger counter right behind the sampling filter, an HPGe detector that allows radionuclide identification in the collected aerosol samples, and a large NaI(Tl) crystal that detects airborne and ground gamma radiation; d) several environmental sensors (temperature, pressure, aircraft/wind speed) and a GPS receiver that support the full characterization of the sampling conditions and the temporal and geographical location of the acquired data; e) an acquisition and control system based on compact electronics and real-time software that operates the sampling line actuators, guarantees the dynamical isokinetic condition, and acquires the detector and sensor data. With this system, quantitative measurements are available also during the plume phase of an accident, while other aerial platforms without sampling capability can only be used for qualitative assessments. Transmission of all data will soon be implemented in order to make all data available in real time to the Technical Centre for Emergency Management. The use of an unmanned air vehicle (UAV) is discussed as a future option. (author)
Internship guide: Work placements step by step
Haag, Esther
2013-01-01
Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions: What problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you
The way to collisions, step by step
2009-01-01
While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.
Large-scale heterogeneity of Amazonian phenology revealed from 26-year long AVHRR/NDVI time-series
International Nuclear Information System (INIS)
Silva, Fabrício B; Shimabukuro, Yosio E; Aragão, Luiz E O C; Anderson, Liana O; Pereira, Gabriel; Cardozo, Franciele; Arai, Egídio
2013-01-01
Depiction of phenological cycles in tropical forests is critical for an understanding of seasonal patterns in carbon and water fluxes as well as the responses of vegetation to climate variations. However, the detection of clear spatially explicit phenological patterns across Amazonia has proven difficult using data from the Moderate Resolution Imaging Spectroradiometer (MODIS). In this work, we propose an alternative approach based on a 26-year time-series of the normalized difference vegetation index (NDVI) from the Advanced Very High Resolution Radiometer (AVHRR) to identify regions with homogeneous phenological cycles in Amazonia. Specifically, we aim to use a pattern recognition technique, based on temporal signal processing concepts, to map Amazonian phenoregions and to compare the identified patterns with field-derived information. Our automated method recognized 26 phenoregions with unique intra-annual seasonality. This result highlights the fact that known vegetation types in Amazonia are not only structurally different but also phenologically distinct. Flushing of new leaves observed in the field is, in most cases, associated to a continuous increase in NDVI. The peak in leaf production is normally observed from the beginning to the middle of the wet season in 66% of the field sites analyzed. The phenoregion map presented in this work gives a new perspective on the dynamics of Amazonian canopies. It is clear that the phenology across Amazonia is more variable than previously detected using remote sensing data. An understanding of the implications of this spatial heterogeneity on the seasonality of Amazonian forest processes is a crucial step towards accurately quantifying the role of tropical forests within global biogeochemical cycles. (letter)
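A pattern-recognition step that groups pixels by the shape of their mean annual NDVI cycle can be sketched with plain k-means. This toy clustering is an assumed simplification of the authors' temporal signal-processing method, not their algorithm.

```python
import numpy as np

def kmeans_cycles(cycles, k=3, iters=50):
    """Cluster mean annual NDVI cycles (one row per pixel, one column
    per month) into k phenoregion prototypes with plain k-means,
    initialized from the first k rows. Illustrative stand-in only."""
    cycles = np.asarray(cycles, dtype=float)
    centroids = cycles[:k].copy()
    for _ in range(iters):
        d2 = ((cycles[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = cycles[labels == j].mean(axis=0)
    return labels, centroids

# Two synthetic phenoregions: a wetter cycle and a drier, offset copy.
wet = np.array([0.80, 0.90, 0.85, 0.80, 0.70, 0.60,
                0.50, 0.50, 0.60, 0.70, 0.80, 0.85])
data = np.vstack([wet, wet - 0.4, wet + 0.01, wet - 0.41, wet - 0.02])
labels, prototypes = kmeans_cycles(data, k=2)
```

Each prototype plays the role of a phenoregion's characteristic intra-annual cycle; pixels are then mapped by their nearest prototype.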
Furfaro, Lucy L; Chang, Barbara J; Payne, Matthew S
2017-09-01
Streptococcus agalactiae is the leading cause of early-onset neonatal sepsis. Culture-based screening methods lack the sensitivity of molecular assays and do not indicate serotype, a potentially important virulence marker. We aimed to develop a multiplex PCR to detect S. agalactiae while simultaneously identifying serotypes Ia, Ib, and III, which are commonly associated with infant disease. Primers were designed to target S. agalactiae serotype-specific cps genes and the dltS gene. The assay was validated with 512 vaginal specimens from pregnant women; 112 (21.9%) were dltS positive, with 14.3%, 0.9%, and 6.3% of these identified as cps Ia, Ib, and III, respectively. Our assay is a specific and sensitive method to simultaneously detect S. agalactiae and serotypes Ia, Ib, and III in a single step. It is of high significance for clinical diagnostic applications and also provides epidemiological data on serotype, information that may be important for vaccine development and other targeted non-antibiotic therapies.
Besley, Nicholas A
2016-10-11
The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral-screening procedure that retains only integrals involving the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for the integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the excitation space of the absorbing core orbitals and through further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60, and C70.
Huhn, F.; Schanz, D.; Manovski, P.; Gesemann, S.; Schröder, A.
2018-05-01
Time-resolved volumetric pressure fields are reconstructed from Lagrangian particle tracking with high seeding concentration using the Shake-The-Box algorithm in a perpendicular impinging jet flow with exit velocity U = 4 m/s (Re ≈ 36,000) and nozzle-plate spacing H/D = 5. Helium-filled soap bubbles are used as tracer particles, illuminated with pulsed LED arrays. A large measurement volume has been covered (a cloud of tracked particles in a volume of 54 L, ≈ 180,000 particles). The reconstructed pressure field has been validated against microphone recordings at the wall, with high correlation coefficients of up to 0.88. In a reduced measurement volume (13 L), dense Lagrangian particle tracking is shown to be feasible up to the maximum possible jet velocity of U = 16 m/s.
Energy Technology Data Exchange (ETDEWEB)
Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)
1967-06-15
The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant, and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered, and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200 °C. Information which may be derived from flow patterns is discussed, including the determination of mixing parameters, gas hold-up in gas/liquid reactions, and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefine hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving-bed catalyst system for the isomerization of xylenes, and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)
International Nuclear Information System (INIS)
Huang, Ying; Fang, Xia; Xiao, Hai; Bevans, Wesley James; Chen, Genda; Zhou, Zhi
2013-01-01
Steel buildings are subjected to fire hazards during or immediately after a major earthquake. Under combined gravity and thermal loads, they have non-uniformly distributed stiffness and strength, and thus collapse progressively with large deformation. In this study, large-strain optical fiber sensors for high temperature applications and a temperature-dependent finite element model updating method are proposed for accurate prediction of structural behavior in real time. The optical fiber sensors can measure strains up to 10% at approximately 700 °C. Their measurements are in good agreement with those from strain gauges up to 0.5%. In comparison with the experimental results, the proposed model updating method can reduce the predicted strain errors from over 75% to below 20% at 800 °C. The minimum number of sensors in a fire zone that can properly characterize the vertical temperature distribution of heated air due to the gravity effect should be included in the proposed model updating scheme to achieve a predetermined simulation accuracy. (paper)
Real-Time Adaptive Control of a Magnetic Levitation System with a Large Range of Load Disturbance.
Zhang, Zhizhou; Li, Xiaolong
2018-05-11
Between an idle light-load condition and a full-load condition, the load mass of a suspension system changes very significantly. If the control parameters of conventional control methods remain unchanged, the suspension performance of the control system deteriorates rapidly, or the system even loses stability, when the load mass changes over a large range. In this paper, a real-time adaptive control method for a magnetic levitation system with a large range of load-mass changes is proposed. First, the suspension control system model of the maglev train is built, and the stability of the closed-loop system is analyzed. Then, a fast inner current loop is used to simplify the design of the suspension control system, and an adaptive control method is put forward to ensure that the system remains stable when the load mass varies over a wide range. Simulations and experiments show that when the load mass of the maglev system varies greatly, the adaptive control method is effective in suspending the system stably at a given displacement.
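As an illustration of why such adaptation matters, the sketch below uses a simplified electromagnet force model to estimate the load mass from the steady-state levitation current and air gap, then rescales nominal controller gains in proportion. The force constant, gains, and operating point are hypothetical illustration values, not the paper's controller design.

```python
G = 9.81           # gravitational acceleration [m/s^2]
K_F = 2.0e-3       # hypothetical electromagnet force constant [N*m^2/A^2]

def estimate_mass(i0, z0):
    """Estimate the suspended mass from the steady-state coil current i0 [A]
    and air gap z0 [m], assuming the force model F = K_F * (i/z)**2,
    so that m * G = K_F * (i0/z0)**2 at equilibrium."""
    return K_F * (i0 / z0) ** 2 / G

def adapted_gains(m, base_gains, m_nominal):
    """Rescale nominal PID gains in proportion to the estimated mass so the
    closed-loop dynamics stay roughly invariant as the load changes."""
    scale = m / m_nominal
    kp, ki, kd = base_gains
    return (kp * scale, ki * scale, kd * scale)

m_est = estimate_mass(i0=5.0, z0=0.01)                     # 5 A at a 10 mm gap
gains = adapted_gains(m_est, base_gains=(1200.0, 300.0, 60.0),
                      m_nominal=m_est)                     # scale = 1 here
```

In a real controller the mass estimate would be filtered online and the gain update rate limited; this sketch only shows the proportional rescaling idea.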
Mano, Junichi; Hatano, Shuko; Nagatomi, Yasuaki; Futo, Satoshi; Takabatake, Reona; Kitta, Kazumi
2018-03-01
Current genetically modified organism (GMO) detection methods allow for sensitive detection. However, a further increase in sensitivity will enable more efficient testing for large grain samples and reliable testing for processed foods. In this study, we investigated real-time PCR-based GMO detection methods using a large amount of DNA template. We selected target sequences that are commonly introduced into many kinds of GM crops, i.e., 35S promoter and nopaline synthase (NOS) terminator. This makes the newly developed method applicable to a wide range of GMOs, including some unauthorized ones. The estimated LOD of the new method was 0.005% of GM maize events; to the best of our knowledge, this method is the most sensitive among the GM maize detection methods for which the LOD was evaluated in terms of GMO content. A 10-fold increase in the DNA amount as compared with the amount used under common testing conditions gave an approximately 10-fold reduction in the LOD without PCR inhibition. Our method is applicable to various analytical samples, including processed foods. The use of other primers and fluorescence probes would permit highly sensitive detection of various recombinant DNA sequences besides the 35S promoter and NOS terminator.
Directory of Open Access Journals (Sweden)
Durandt, Casper
2016-08-01
Full Text Available Conservative engineering design rules for large serial coupled production processes result in machines having locked-in free time (also called 'critical downtime' or 'maintenance opportunity windows'), which causes idle time if not used. Operators are not able to assess a large production process holistically, and so may not be aware that they form the current bottleneck, or that they have free time available due to interruptions elsewhere. A real-time method is developed to accurately calculate and display free time in location and magnitude, and efficiency improvements are demonstrated in large-scale production runs.
Betancourt, J. L.; Weltzin, J. F.
2013-12-01
As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological and human responses and impacts. The hierarchical framework relies on ground and satellite observations.
Xie, Yanhua; Weng, Qihao
2017-06-01
Accurate, up-to-date, and consistent information on urban extents is vital for numerous applications central to urban planning, ecosystem management, and environmental assessment and monitoring. However, current large-scale urban extent products are not uniform with respect to definition, spatial resolution, temporal frequency, and thematic representation. This study aimed to enhance, spatiotemporally, time-series DMSP/OLS nighttime light (NTL) data for detecting large-scale urban changes. The enhanced NTL time series from 1992 to 2013 was first generated by implementing global inter-calibration, vegetation-based spatial adjustment, and urban archetype-based temporal modification. The dataset was then used for updating and backdating urban changes for the contiguous U.S.A. (CONUS) and China by using the Object-based Urban Thresholding method (i.e., the NTL-OUT method; Xie and Weng, 2016b). The results showed that the updated urban extents were reasonably accurate, with a city-scale RMSE (root mean square error) of 27 km² and a Kappa of 0.65 for CONUS, and 55 km² and 0.59 for China, respectively. The backdated urban extents yielded similar accuracy, with an RMSE of 23 km² and a Kappa of 0.63 in CONUS, and 60 km² and 0.60 in China. The accuracy assessment further revealed that the spatial enhancement greatly improved the accuracy of urban updating and backdating by significantly reducing RMSE and slightly increasing Kappa values. The temporal enhancement also reduced RMSE and improved the spatial consistency between estimated and reference urban extents. Although the utilization of enhanced NTL data successfully detected changes in urban size, relatively low locational accuracy of the detected urban changes was observed. It is suggested that the proposed methodology would be more effective for updating and backdating global urban maps if the NTL data were further fused with higher-spatial-resolution imagery.
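The two accuracy metrics quoted above, RMSE of city-scale urban area and Cohen's Kappa for pixel-level agreement, can be computed as in the following minimal sketch; the data values are placeholders, not the study's.

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square error between estimated and reference urban areas
    (e.g., km^2 per city)."""
    e = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

def cohens_kappa(pred, truth):
    """Cohen's kappa for binary urban / non-urban maps: observed agreement
    corrected for the agreement expected by chance."""
    pred = np.asarray(pred).ravel().astype(bool)
    truth = np.asarray(truth).ravel().astype(bool)
    po = np.mean(pred == truth)                 # observed agreement
    pe = (np.mean(pred) * np.mean(truth)        # chance agreement, urban
          + np.mean(~pred) * np.mean(~truth))   # chance agreement, non-urban
    return float((po - pe) / (1.0 - pe))

# placeholder city areas [km^2] and pixel maps
print(rmse([120.0, 85.0, 60.0], [117.0, 89.0, 60.0]))
print(cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0]))
```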
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
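A minimal sketch of the idea behind LRE (not the LRE Analyzer's actual code): in a toy amplification model where cycle efficiency declines linearly with fluorescence, regressing the per-cycle efficiency against the pre-cycle fluorescence over the central region of the profile recovers the maximal efficiency without a standard curve. All parameter values are illustrative.

```python
import numpy as np

def simulate_amplification(f0, e_max, f_max, cycles=40):
    """Toy qPCR profile: per-cycle efficiency falls linearly with fluorescence."""
    f = [f0]
    for _ in range(cycles):
        e = e_max * (1.0 - f[-1] / f_max)   # efficiency at the current cycle
        f.append(f[-1] * (1.0 + e))
    return np.array(f)

def lre_fit(f):
    """Regress cycle efficiency E_C = F_C / F_(C-1) - 1 against the pre-cycle
    fluorescence over the central region of the profile; the intercept
    estimates the maximal efficiency E_max."""
    ec = f[1:] / f[:-1] - 1.0               # per-cycle efficiency
    fp = f[:-1]                             # pre-cycle fluorescence
    central = (f[1:] > 0.1 * f.max()) & (f[1:] < 0.9 * f.max())
    slope, intercept = np.polyfit(fp[central], ec[central], 1)
    return intercept, slope

profile = simulate_amplification(f0=1e-4, e_max=0.95, f_max=1.0)
e_max_est, slope = lre_fit(profile)         # e_max_est recovers 0.95
```

In the published LRE method the regression, the choice of the central window, and the derivation of the target quantity F0 are more involved; this only illustrates why sigmoidal profiles make standard curves unnecessary.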
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
International Nuclear Information System (INIS)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik; Suzuki, Mitsutoshi
2014-01-01
The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is specified on a day basis and is varied from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles of reactor operation. In addition, calculation efficiency, based on varying the number of computer processors used to run the analysis, has been investigated in terms of computing time. The optimization analysis for the reactor design, which uses a large fast breeder reactor type as the reference case, was performed with the established reactor design code JOINT-FR. The results show that the calculated criticality becomes higher for smaller burnup steps (in days), while the breeding ratio becomes smaller for smaller burnup steps. Some nuclides contribute to a better criticality estimate at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional time required for more finely resolved step calculations, although the computing time is not directly proportional to the number of divisions of the burnup time step.
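The sensitivity to burnup step size described above can be illustrated with a toy single-nuclide depletion calculation: integrating dN/dt = -r*N with an explicit Euler step shows how a coarse step (e.g., 400 days) biases the end-of-cycle inventory relative to a fine step (10 days). The removal rate and cycle length below are illustrative values, not JOINT-FR inputs.

```python
import math

def deplete(n0, rate, t_end, dt):
    """Explicit-Euler depletion of dN/dt = -rate * N over t_end days with
    burnup step dt [days]; returns the end-of-cycle inventory."""
    n, t = n0, 0.0
    while t < t_end - 1e-9:
        n *= (1.0 - rate * dt)   # one Euler burnup step
        t += dt
    return n

RATE, T_END = 1.0e-3, 800.0      # effective removal rate [1/day], cycle length
exact = math.exp(-RATE * T_END)  # analytic solution for comparison
err_10 = abs(deplete(1.0, RATE, T_END, 10.0) - exact)    # fine 10-day step
err_400 = abs(deplete(1.0, RATE, T_END, 400.0) - exact)  # coarse 400-day step
# err_10 is far smaller: a coarse burnup step biases the nuclide inventory,
# which in a full calculation feeds back into criticality and breeding ratio
```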
International Nuclear Information System (INIS)
Sumura, Masahiro; Shigeno, Kazushi; Hyuga, Taiju; Yoneda, Tatsuaki; Shiina, Hiroaki; Igawa, Mikio
2007-01-01
The objective of this study was to determine whether real-time elastography can be used to detect prostate cancer as a relatively non-invasive modality based on the tissue strain value. Seventeen patients underwent real-time elastography in conjunction with digital rectal examination (DRE), conventional gray-scale transrectal ultrasonography (TRUS), color Doppler ultrasonography (CDUS), and magnetic resonance imaging (MRI) prior to radical prostatectomy. The elastogram was compared with the findings of the conventional modalities and the pathological findings of the prostatectomy specimens. To obtain the elastogram, compression of the prostate was performed with the aid of a visual indicator on a video screen. Twenty of 27 pathologically confirmed tumors were detected with real-time elastography. The cancer detection rate of real-time elastography was superior to the rates of the other modalities and nearly equal on the anterior side (75.0%) and the posterior side (73.7%) of the prostate. A higher tumor detection rate for real-time elastography was observed for tumors with a higher Gleason score and larger tumor volume. In our preliminary study, real-time elastography in conjunction with gray-scale TRUS is a non-invasive modality to detect prostate cancer. (author)
Directory of Open Access Journals (Sweden)
Reeves Gillian K
2011-06-01
Full Text Available Abstract Background: How overall physical activity relates to specific activities and how reported activity changes over time may influence the interpretation of observed associations between physical activity and health. We examine the relationships between various physical activities self-reported at different times in a large cohort study of middle-aged UK women. Methods: At recruitment, Million Women Study participants completed a baseline questionnaire including questions on the frequency of strenuous and of any physical activity. About 3 years later, 589,896 women also completed a follow-up questionnaire reporting the hours they spent on a range of specific activities. Time spent on each activity was used to estimate the associated excess metabolic equivalent hours (MET-hours), and this value was compared across categories of physical activity reported at recruitment. Additionally, 18,655 women completed the baseline questionnaire twice, at intervals of up to 4 years; repeatability over time was assessed using the weighted kappa coefficient (κweighted) and absolute percentage agreement. Results: The average number of hours per week women reported doing specific activities was 14.0 for housework, 4.5 for walking, 3.0 for gardening, 0.2 for cycling, and 1.4 for all strenuous activity. Time spent and the estimated excess MET-hours associated with each activity increased with increasing frequency of any or strenuous physical activity reported at baseline (tests for trend). For strenuous activity, repeatability was κweighted = 0.71 for questionnaires administered less than 6 months apart and 52% agreement (κweighted = 0.51) for questionnaires more than 2 years apart. Corresponding values for any physical activity were 57% (κweighted = 0.67) and 47% (κweighted = 0.58). Conclusions: In this cohort, responses to simple questions on the frequency of any physical activity and of strenuous activity asked at baseline were associated with hours spent on specific activities and the associated estimated excess MET-hours.
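The excess MET-hours estimate described in the Methods can be sketched as follows; the MET intensity values are illustrative compendium-style numbers, not necessarily those used in the Million Women Study.

```python
# Illustrative MET intensities (metabolic equivalents) per activity;
# these are hypothetical compendium-style values, not the study's own.
MET = {
    "housework": 2.5,
    "walking": 3.3,
    "gardening": 3.8,
    "cycling": 6.0,
    "strenuous": 7.0,
}

def excess_met_hours(hours_per_week):
    """Excess MET-hours = hours * (MET - 1), summed over activities;
    subtracting 1 MET removes resting energy expenditure."""
    return sum(h * (MET[a] - 1.0) for a, h in hours_per_week.items())

# the cohort-average weekly hours reported in the abstract
weekly = {"housework": 14.0, "walking": 4.5, "gardening": 3.0,
          "cycling": 0.2, "strenuous": 1.4}
total = excess_met_hours(weekly)   # total excess MET-hours per week
```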
3-D time-dependent numerical model of flow patterns within a large-scale Czochralski system
Nam, Phil-Ouk; O, Sang-Kun; Yi, Kyung-Woo
2008-04-01
Silicon single crystals grown by the Czochralski (Cz) method have increased in size to 300 mm, resulting in the use of larger crucibles. The objective of this study is to investigate the continuous Cz method in a large crucible (800 mm), in which a polycrystalline silicon rod is inserted into the melt. The numerical model is based on a time-dependent, three-dimensional standard k-ε turbulence model implemented in the analytical software package CFD-ACE+, version 2007. Wood's metal, which has a low melting point (Tm = 70 °C), was used as the modeling fluid. Crystal rotation was applied in the clockwise direction at rates varying from 0 to 15 rpm, while the crucible was rotated counter-clockwise at rates between 0 and 3 rpm. The results show that asymmetrical flow phenomena arise as a result of crystal and crucible rotation, and that these phenomena move with the passage of time. Near the crystal, the flow moves towards the crucible at the poles of the asymmetrical pattern. Away from the poles, a vortex begins to form, which is strongly pronounced in the region between the poles.
Time-resolved assessment of collateral flow using 4D CT angiography in large-vessel occlusion stroke
International Nuclear Information System (INIS)
Froelich, Andreas M.J.; Wolff, Sarah Lena; Psychogios, Marios N.; Schramm, Ramona; Knauth, Michael; Schramm, Peter; Klotz, Ernst; Wasser, Katrin
2014-01-01
In acute stroke patients with large vessel occlusion, collateral blood flow affects tissue fate and patient outcome. The visibility of collaterals on computed tomography angiography (CTA) strongly depends on the acquisition phase, but the optimal time point for collateral imaging is unknown. We analysed collaterals in a time-resolved fashion using four-dimensional (4D) CTA in 82 endovascularly treated stroke patients, aiming to determine which acquisition phase best depicts collaterals and predicts outcome. Early, peak and late phases as well as temporally fused maximum intensity projections (tMIP) were graded using a semiquantitative regional leptomeningeal collateral score, compared with conventional single-phase CTA and correlated with functional outcome. The total extent of collateral flow was best visualised on tMIP. Collateral scores were significantly lower on early and peak phase as well as on single-phase CTA. Collateral grade was associated with favourable functional outcome and the strength of this relationship increased from earlier to later phases, with collaterals on tMIP showing the strongest correlation with outcome. Temporally fused tMIP images provide the best depiction of collateral flow. Our findings suggest that the total extent of collateral flow, rather than the velocity of collateral filling, best predicts clinical outcome. (orig.)
Directory of Open Access Journals (Sweden)
Jyh-Woei Lin
2011-01-01
Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M ≥ 6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months in which no M ≥ 6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. PCA has previously been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of the principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.
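A minimal sketch of the PCA step, using synthetic data rather than GEONET records: the leading eigenvalue of the covariance of a latitude × time TEC matrix grows sharply when a coherent anomaly is injected, which is the signature the study's eigenvalue criterion relies on.

```python
import numpy as np

def principal_eigenvalues(tec):
    """tec: 2-D array (latitude bins x time samples). Returns the eigenvalues
    of the latitude-latitude covariance matrix, largest first."""
    x = tec - tec.mean(axis=1, keepdims=True)   # remove each latitude's mean
    cov = x @ x.T / (x.shape[1] - 1)
    vals = np.linalg.eigvalsh(cov)              # ascending for symmetric input
    return vals[::-1]

rng = np.random.default_rng(0)
background = rng.normal(10.0, 0.5, size=(20, 30))   # quiet TEC field (TECU)
anomaly = background.copy()
anomaly[:, 15] += 8.0                                # coherent anomalous epoch

lead_bg = principal_eigenvalues(background)[0]
lead_an = principal_eigenvalues(anomaly)[0]
# lead_an >> lead_bg: the injected anomaly inflates the leading eigenvalue
```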